430 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 62, NO. 1, JANUARY 2016

Constant Composition Distribution Matching


Patrick Schulte and Georg Böcherer, Member, IEEE

Abstract—Distribution matching transforms independent and Bernoulli(1/2) distributed input bits into a sequence of output symbols with a desired distribution. Fixed-to-fixed length, invertible, and low complexity encoders and decoders based on constant composition and arithmetic coding are presented. The encoder achieves the maximum rate, namely, the entropy of the desired distribution, asymptotically in the blocklength. Furthermore, the normalized divergence of the encoder output and the desired distribution goes to zero in the blocklength.

Index Terms—Distribution matching, fixed length, arithmetic coding, asymptotically optimal algorithm.

Manuscript received March 18, 2015; revised August 10, 2015; accepted September 18, 2015. Date of publication November 9, 2015; date of current version December 18, 2015. This work was supported by the German Federal Ministry of Education and Research in the framework of an Alexander von Humboldt Professorship.
The authors are with the Institute for Communications Engineering, Technische Universität München, Munich 80333, Germany (e-mail: [email protected]; [email protected]).
Communicated by E. Tuncel, Associate Editor for Source Coding.
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIT.2015.2499181

I. INTRODUCTION

A DISTRIBUTION MATCHER transforms independent Bernoulli(1/2) distributed input bits into output symbols with a desired distribution. We measure the distance between the matcher output distribution and the desired distribution by normalized informational divergence [1, p. 7]. Informational divergence is also known as Kullback-Leibler divergence or relative entropy [2, Sec. 2.3]. A dematcher performs the inverse operation and recovers the input bits from the output symbols. A distribution matcher is a building block of the bootstrap scheme [3] that achieves the capacity of arbitrary discrete memoryless channels [4]. Distribution matchers are used in [5, Sec. VI] for rate adaptation and in [6] to achieve the capacity of the additive white Gaussian noise channel.

Prefix-free distribution matching was proposed in [7, Sec. IV.A]. In [8] and [9], Huffman codes are used for matching. Optimal variable-to-fixed and fixed-to-variable length distribution matchers are proposed in [10] and [11], respectively. The codebooks of the matchers in [8]–[11] must be generated offline and stored. This is infeasible for large codeword lengths, which are necessary to achieve the maximum rate. This problem is solved in [12] and [13] by using arithmetic coding to calculate the codebook online. The matchers proposed in [12] and [13] are asymptotically optimal. All approaches [8]–[13] are variable length, which can lead to varying transmission rate, large buffer sizes, error propagation, and synchronization problems [8, Sec. I]. Fixed-to-fixed (f2f) length codes do not have these issues.

The author of [14, Sec. 4.8] suggests concatenating short codes, and Mondelli et al. [4] employ a forward error correction decoder to build an f2f length matcher. The dematchers of [4] and [14] cannot always recover the input sequence with zero error. Hence systematic errors are introduced that cannot be corrected by the error correction code or by retransmission. The thesis [15] proposes an invertible f2f length distribution matcher called adaptive arithmetic distribution matcher (aadm). The algorithm is computationally complex.

In this work we propose practical, invertible, f2f length distribution matchers. They are asymptotically optimal and are based on constant composition codes indexed by arithmetic coding. The paper is organized as follows. In Section II we formally define distribution matching. We analyze constant composition codes in Section III. In Section IV we show how a constant composition distribution matcher (ccdm) and dematcher can be implemented efficiently by arithmetic coding.

II. PROBLEM STATEMENT

The entropy of a discrete random variable A with alphabet 𝒜 and distribution P_A is

    H(A) = −∑_{a∈supp(P_A)} P_A(a) log₂ P_A(a)    (1)

where supp(P_A) ⊆ 𝒜 is the support of P_A. The informational divergence of two distributions on 𝒜 is

    D(P_Â || P_A) = ∑_{a∈supp(P_Â)} P_Â(a) log₂ [P_Â(a) / P_A(a)].    (2)

The normalized informational divergence for length n random vectors Â^n = Â_1 … Â_n and A^n is defined as

    D(P_{Â^n} || P_{A^n}) / n.    (3)

For random vectors with independent and identically distributed (iid) entries, we write

    P_{A^n}(a^n) = ∏_{i=1}^{n} P_A(a_i).    (4)

A one-to-one f2f distribution matcher is an invertible function f. We denote the inverse function by f^{−1}. The mapping imitates a desired distribution P_A by mapping m Bernoulli(1/2) distributed bits B^m to length n strings Ã^n = f(B^m) ∈ 𝒜^n. The output distribution is P_{Ã^n}. The concept of one-to-one f2f distribution matching is illustrated in Fig. 1.
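The definitions above map directly to code. The following sketch, with an example alphabet and distributions chosen here purely for illustration, evaluates the entropy (1) and the informational divergence (2); for distributions that factor as in (4), the normalized divergence (3) reduces to the per-letter divergence.

```python
import math

def entropy(P):
    """H(A) = -sum_a P(a) * log2 P(a), taken over the support of P, cf. (1)."""
    return -sum(p * math.log2(p) for p in P.values() if p > 0)

def divergence(P_hat, P):
    """D(P_hat || P) = sum_a P_hat(a) * log2(P_hat(a) / P(a)), cf. (2)."""
    return sum(q * math.log2(q / P[a]) for a, q in P_hat.items() if q > 0)

# Example distributions (illustrative only).
P_A = {0: 0.5, 1: 0.5}
P_B = {0: 0.25, 1: 0.75}

print(entropy(P_A))          # 1.0 (bits)
print(divergence(P_B, P_A))  # positive; zero iff the two distributions agree
# For iid vectors, D(P_B^n || P_A^n) = n * D(P_B || P_A), so dividing
# by n as in (3) recovers the per-letter value computed above.
```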
0018-9448 © 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

Authorized licensed use limited to: NUST School of Electrical Engineering and Computer Science (SEECS). Downloaded on October 01,2023 at 15:52:20 UTC from IEEE Xplore. Restrictions apply.
SCHULTE AND BÖCHERER: CONSTANT COMPOSITION DISTRIBUTION MATCHING 431

Fig. 1. Matching a data block B^m = B_1 … B_m to output symbols Ã^n = Ã_1 … Ã_n and reconstructing the original sequence at the dematcher. The rate is m/n bits per output symbol. The matcher can be interpreted as emulating a discrete memoryless source P_A, i.e., P_{Ã^n} and P_{A^n} are close in informational divergence.

Definition 1: A matching rate R = m/n is achievable for a distribution P_A if for any α > 0 and sufficiently large n there is an invertible mapping f : {0,1}^m → 𝒜^n for which

    D(P_{f(B^m)} || P_{A^n}) / n ≤ α.    (5)

The following proposition in [16] relates the rate R and (5).

Proposition 1 (Converse, [16, Proposition 8]): There exists a positive-valued function β with

    β(α) → 0 as α → 0    (6)

such that (5) implies

    m/n ≤ H(A)/H(B) + β(α).    (7)

Proposition 1 bounds the maximum rate that can be achieved under condition (5). Since H(B) = 1 we have

    R ≤ H(A)    (8)

for any achievable rate R.

III. CONSTANT COMPOSITION DISTRIBUTION MATCHING

The empirical distribution of a vector c of length n is defined as

    P_{Ā,c}(a) = n_a(c) / n    (9)

where n_a(c) = |{i : c_i = a}| is the number of times symbol a appears in c. The authors of [17, Sec. 2.1] call P_{Ā,c} the type of c. An n-type is a type based on a length n sequence. A codebook C_ccdm ⊆ 𝒜^n is called a constant composition code if all codewords are of the same type, i.e., n_a(c) does not depend on the codeword c. We will write n_a in place of n_a(c) for a constant composition code.

A. Approach

We use a constant composition code with n_a ≈ P_A(a) · n. As all n_a need to be integers and add up to n, there are multiple possibilities to choose the n_a. We use the allocation that solves

    P_Ā = argmin_{P_Ā} D(P_Ā || P_A)  subject to  P_Ā is an n-type.    (10)

The solution of (10) can be found efficiently by [18, Algorithm 2]. This allocation provides a clear rule for choosing the n_a and is convenient for analysis. Suppose the output length n is fixed and that we can choose the input length m. Let T^n_{P_Ā} be the set of vectors of type P_Ā, i.e., we have

    T^n_{P_Ā} = { v | v ∈ 𝒜^n, n_a(v)/n = P_Ā(a) ∀a ∈ 𝒜 }.    (11)

The matcher is invertible, so we need at least as many codewords as input blocks. The input blocklength must thus not exceed log₂ |T^n_{P_Ā}|. We set the input length to m = ⌊log₂ |T^n_{P_Ā}|⌋ and we define the encoding function

    f_ccdm : {0,1}^m → T^n_{P_Ā}.    (12)

The actual mapping f_ccdm can be implemented efficiently by arithmetic coding, as we will show in Section IV. The constant composition codebook is now given by the image of f_ccdm, i.e.,

    C_ccdm = f_ccdm({0,1}^m).    (13)

Since f_ccdm is invertible, the codebook size is |C_ccdm| = 2^m.

B. Analysis

We show that f_ccdm asymptotically achieves all rates satisfying (8). We can bound m by

    m = ⌊log₂ |T^n_{P_Ā}|⌋ ≥ log₂ |T^n_{P_Ā}| − 1.    (14)

Recall that the matcher output distribution is P_{Ã^n}. We have

    D(P_{Ã^n} || P_{A^n}) = ∑_{a^n ∈ C_ccdm} 2^{−m} log₂ [2^{−m} P_{Ā^n}(a^n) / (P_{A^n}(a^n) P_{Ā^n}(a^n))]
                          = D(P_{Ã^n} || P_{Ā^n}) + ∑_{a^n ∈ C_ccdm} 2^{−m} log₂ [P_{Ā^n}(a^n) / P_{A^n}(a^n)]
                          = D(P_{Ã^n} || P_{Ā^n}) + |C_ccdm| 2^{−m} ∑_{a ∈ 𝒜} n_a log₂ [P_Ā(a) / P_A(a)]
                          = D(P_{Ã^n} || P_{Ā^n}) + n D(P_Ā || P_A).    (15)

The first summand in (15) is Term 1 and the second is Term 2. For Term 1 we obtain

    D(P_{Ã^n} || P_{Ā^n}) = ∑_{a^n ∈ C_ccdm} 2^{−m} log₂ [2^{−m} / ∏_{i ∈ 𝒜} P_Ā(i)^{n_i}]
                          = |C_ccdm| 2^{−m} log₂ [2^{−m} / 2^{−nH(Ā)}]
                          = nH(Ā) − m.    (16)

Using (16) in (15) and dividing by n we have

    D(P_{Ã^n} || P_{A^n}) / n = H(Ā) − R + D(P_Ā || P_A).    (17)

The choice (10) of P_Ā minimizes the third term on the right-hand side of (17) and guarantees (see [18, Proposition 4]) the bound (18).
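The quantities appearing in this analysis are easy to evaluate numerically. The sketch below is a rough illustration rather than the paper's method: it replaces the optimal allocation (10) (computed by [18, Algorithm 2]) with a simple largest-remainder rounding of n·P_A(a), which is our assumption; the type-class size |T^n_{P_Ā}| is the multinomial coefficient n!/∏_a n_a!, and m = ⌊log₂ |T^n_{P_Ā}|⌋ as in (14).

```python
import math

def n_type(P, n):
    """Integer counts n_a summing to n that approximate n * P(a)
    (largest-remainder rounding; a heuristic stand-in for (10))."""
    counts = {a: math.floor(n * p) for a, p in P.items()}
    rest = n - sum(counts.values())
    # Hand out the remaining symbols to the largest fractional parts.
    order = sorted(P, key=lambda a: n * P[a] - counts[a], reverse=True)
    for a in order[:rest]:
        counts[a] += 1
    return counts

def ccdm_parameters(P, n):
    counts = n_type(P, n)
    size = math.factorial(n)          # |T^n| = n! / prod_a n_a!
    for na in counts.values():
        size //= math.factorial(na)
    m = size.bit_length() - 1         # m = floor(log2 |T^n|), cf. (14)
    return counts, size, m, m / n     # rate R = m/n

P_A = {0: 0.0722, 1: 0.1654, 2: 0.3209, 3: 0.4415}  # H(A) is about 1.75 bits
for n in (10, 100, 1000):
    counts, size, m, rate = ccdm_parameters(P_A, n)
    print(n, m, rate)  # the rate R = m/n grows toward H(A) with n
```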


Fig. 2. Rates of ccdm versus output blocklength for P_A = (0.0722, 0.1654, 0.3209, 0.4415).

Fig. 3. Normalized divergence of ccdm versus output blocklength for P_A = (0.0722, 0.1654, 0.3209, 0.4415). For comparison, the performance of optimal f2f [14, Sec. 4.4] and aadm [15] is displayed. Because of limited computational resources, we could calculate the performance of optimal f2f only up to a blocklength of n = 90.

    D(P_Ā || P_A) < log₂ (1 + k / [min_{a∈supp(P_A)} P_A(a) n²])    (18)

where k = |𝒜| is the alphabet size. Consequently, we know that this term vanishes as the blocklength approaches infinity, i.e., we have

    lim_{n→∞} D(P_Ā || P_A) = 0.    (19)

We now relate the input and output lengths to understand the asymptotic behavior of the rate. By [17, Lemma 2.2], we have

    |T^n_{P_Ā}| ≥ [(n+k−1 choose k−1)]^{−1} 2^{nH(Ā)} ≥ (n+k−1)^{−(k−1)} 2^{nH(Ā)}.    (20)

Taking the logarithm to the base 2 and dividing by n we have

    log₂ |T^n_{P_Ā}| / n ≥ −(k−1) log₂(n+k−1) / n + H(Ā).    (21)

For the rate, we obtain

    R = m/n ≥ log₂ |T^n_{P_Ā}| / n − 1/n    (by (14))
            ≥ −(k−1) log₂(n+k−1) / n + H(Ā) − 1/n    (by (21))    (22)

and in the asymptotic case

    lim_{n→∞} R = H(Ā).    (23)

From (19) and [16, Proposition 6] we know that H(Ā) → H(A), and by (19) and (23) in (17), normalized divergence approaches zero for n → ∞.

Example 1: The desired distribution is P_A = (0.0722, 0.1654, 0.3209, 0.4415). Fig. 2 and Fig. 3 show the rates and normalized divergences of ccdm and the optimal f2f length matcher [14, Sec. 4.4], respectively. The empirical performance of aadm [15] is also displayed. For optimal f2f and aadm, the rate is fixed to H(A) bits per symbol. Fig. 3 shows that the ccdm needs about 160 symbols to reach an informational divergence of 0.06 bits per symbol, which is about 4 times the blocklength of the optimal scheme. However, the memory for storing the optimal codebook grows exponentially in m as there are 2^m codewords. In this example, ccdm performs better than aadm for short blocklengths up to 100 symbols. Fig. 2 also shows the lower and upper bounds (8) and (22), respectively.

IV. ARITHMETIC CODING

We use arithmetic coding for indexing sequences efficiently. In [19] an m-out-of-n coding scheme was introduced that uses the same technique.

Fig. 4. Diagram of a constant composition arithmetic encoder with P_Ā(0) = P_Ā(1) = 0.5, m = 2 and n = 4.

Our arithmetic encoder associates an interval to each input sequence in {0,1}^m and it associates an interval to each output sequence in T^n_{P_Ā}; see Fig. 4 for an example. The size of an interval is equal to the probability of the corresponding sequence according to the input and output model, respectively. For the input model we choose an iid Bernoulli(1/2) process.

 
We describe the output model by a random vector

    Ā^n = Ā_1 Ā_2 … Ā_n    (24)

with marginals P_{Ā_i} = P_Ā and the uniform distribution

    P_{Ā^n}(a^n) = 1 / |T^n_{P_Ā}|  ∀a^n ∈ T^n_{P_Ā}.

The intervals are ordered lexicographically. All input and output intervals range from 0 to 1 because all probabilities add up to 1.

Example 2: Fig. 4 shows input and output intervals with output length n = 4 and P_Ā(0) = P_Ā(1) = 0.5. There are 4 equally probable input sequences and 6 equally probable output sequences. The intervals on the input side are [0, 0.25), [0.25, 0.5), [0.5, 0.75) and [0.75, 1). The intervals on the output side are [0, 1/6), [1/6, 2/6), [2/6, 3/6), [3/6, 4/6), [4/6, 5/6) and [5/6, 1).¹ The arithmetic encoder can link an output sequence to an input sequence if the lower border of the output interval is inside the input interval. In the example (Fig. 4), '00' may link to both '0101' and '0011', while for '01' only a link to '0110' is possible. There are at most two possible choices because by (14) the input interval size is less than twice the output interval size. Both choices are valid and we can perform an inverse operation. In our implementation, the encoder decides for the output sequence with the lowest interval border. As a result, the codebook C_ccdm of Example 2 is {'0011', '0110', '1001', '1100'}. In general C_ccdm has cardinality 2^m with 2^m ≤ |T^n_{P_Ā}| < 2^{m+1} according to (14). It is not possible to index the whole set T^n_{P_Ā} unless 2^m = |T^n_{P_Ā}|. The analysis of the code (Section III-B) is valid for all codebooks C_ccdm ⊆ T^n_{P_Ā}. The actual subset is implicitly defined by the arithmetic encoder.

We now discuss the online algorithm that processes the input sequentially. Initially, the input interval spans from 0 to 1. As the input model is Bernoulli(1/2), we split the interval into two equally sized intervals and continue with the upper interval in case the first input bit is '1'; otherwise we continue with the lower interval. After the next input bit arrives we repeat the last step. After m input bits we reach an interval of size 2^{−m}. After every refinement of the input interval the algorithm checks for a sure prefix of the output sequence, e.g., in Fig. 4 we see that if the input starts with 1 the output must start with 1. Every time we extend the sure prefix by a new symbol, we must calculate the probability of the next symbol given the sure prefix. That means we determine the output intervals within the sure interval of the prefix. The model for calculating the conditioned probabilities is based on drawing without replacement. There is a bag with n symbols of k discriminable kinds. n_a denotes how many symbols of kind a are initially in the bag and n̄_a is the current number. The probability to draw a symbol of kind a is n̄_a/n̄. If we pick a symbol a, both n̄ and n̄_a decrement by 1.

Fig. 5. Refinement of the output intervals. Round brackets indicate symbols that must follow with probability one.

Example 3: Fig. 5 shows a refinement of the output intervals. Initially there are 2 '0's and 2 '1's in the bag. The distribution of the first drawn symbol is P_{Ā_1}(0) = P_{Ā_1}(1) = 1/2. When drawing a '0', there are 3 symbols remaining: one '0' and two '1's. Thus, the probability for a '0' reduces to 1/3 while the probability of '1' is 2/3. If two '0's were picked, two '1's must follow. This way we ensure that the encoder output is of the desired type. Observe that the probabilities of the next symbol depend on the previous symbols, e.g., we have

    P_{Ā_2|Ā_1}(0|0) ≠ P_{Ā_2|Ā_1}(0|1)    (25)

in general. However, P_{Ā^n}(a^n) = ∏_{i=1}^{n} P_{Ā_i|Ā^{i−1}}(a_i|a^{i−1}) is constant on T^n_{P_Ā}, as we show in the following proposition.

Proposition 2: After n refinements of the output interval, the model used for the refinement step stated above creates equally spaced (equally probable) intervals that are labeled with all sequences in T^n_{P_Ā}.

Proof: All symbols in the bag are chosen at some point. Consequently only sequences in T^n_{P_Ā} may appear. All possibilities associated with the chosen string are products of fractions n̄_a/n̄, where n̄ takes on all values from the initial value to 1 because every symbol is drawn at some point. Thus for each string we obtain for its probability an expression that is independent of the realization itself:

    P_{Ā^n}(a^n) = [n_{a=0}! ⋯ n_{a=k−1}!] / n! = 1 / |T^n_{P_Ā}|  ∀a^n ∈ T^n_{P_Ā}.    (26)
∎

Numerical problems for representing the input interval and the output interval occur after a certain number of input bits. For this reason we introduce a rescaling each time a new output symbol is known. We explain this next.

A. Scaling Input and Output Intervals

After we identify a sure prefix, we are no longer interested in code sequences that do not have that prefix. We scale the input and output interval such that the output interval is [0, 1). Fig. 6 illustrates the mapping of intervals (in1, out1) to (in2, out2). The refinement for the second symbol works as described in Example 3. If the second input bit is 0, we know that 10 must be a prefix of the output. The resulting scaling is shown in Fig. 6 as (in2, out2) to (in3, out3). A more detailed explanation of scaling for arithmetic coding can be found for instance in [20, Ch. 4]. We provide an implementation of ccdm online [21].

¹Please note that in this case no distribution matcher is needed. However, this indexing problem is of interest in its own right.
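For the tiny parameters of Example 2, the interval mapping can be reproduced directly with exact rational arithmetic. The sketch below is a toy, non-sequential rendering (function names are ours): it enumerates the whole type class up front, which is exactly what the sequential refinement and rescaling described above avoid, and is only feasible for very small n.

```python
from fractions import Fraction
from itertools import permutations
import math

def type_class(counts):
    """All sequences with composition counts, in lexicographic order."""
    pool = [a for a in sorted(counts) for _ in range(counts[a])]
    return sorted(set(permutations(pool)))

def ccdm_encode(bits, counts):
    T = type_class(counts)
    m = len(bits)
    low = Fraction(int("".join(map(str, bits)), 2), 2 ** m)
    # Pick the lowest output interval whose lower border lies inside
    # the input interval [low, low + 2^-m).
    return T[math.ceil(low * len(T))]

def ccdm_decode(seq, counts, m):
    T = type_class(counts)
    border = Fraction(T.index(tuple(seq)), len(T))
    return [int(b) for b in format(math.floor(border * 2 ** m), f"0{m}b")]

counts = {0: 2, 1: 2}  # bag of two '0's and two '1's: n = 4
codebook = [ccdm_encode([b1, b2], counts) for b1 in (0, 1) for b2 in (0, 1)]
print(codebook)  # the codebook of Example 2: 0011, 0110, 1001, 1100
```

Decoding inverts the mapping: the lower border of the received codeword's interval identifies the input interval, so `ccdm_decode(ccdm_encode(bits, counts), counts, 2)` returns `bits` for all four inputs.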


Fig. 6. Scaling of input and output intervals in case the input interval is a subset of an output interval. The latter interval corresponds to [0, 1) after scaling. A star indicates that this is just a prefix of the complete word. Round brackets indicate symbols that must follow with probability one.

V. CONCLUSION

We presented a practical and invertible f2f length distribution matcher that achieves the maximum rate asymptotically in the blocklength. In contrast to matchers proposed in the literature [8]–[13], the f2f matcher is robust to synchronization and variable rate problems. In future work we plan to investigate f2f length codes that perform well in the finite blocklength regime.

ACKNOWLEDGMENT

The authors wish to thank Irina Bocharova and Boris Kudryashov for encouraging them to work on the presented approach.

REFERENCES

[1] I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems. Cambridge, U.K.: Cambridge Univ. Press, 2011.
[2] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed. New York, NY, USA: Wiley, 2006.
[3] G. Böcherer and R. Mathar, "Operating LDPC codes with zero shaping gap," in Proc. IEEE Inf. Theory Workshop (ITW), Paraty, Brazil, Oct. 2011, pp. 330–334.
[4] M. Mondelli, R. Urbanke, and S. H. Hassani, "How to achieve the capacity of asymmetric channels," in Proc. 52nd Annu. Allerton Conf. Commun., Control, Comput., Monticello, IL, USA, Sep./Oct. 2014, pp. 789–796.
[5] D. J. C. MacKay, "Good error-correcting codes based on very sparse matrices," IEEE Trans. Inf. Theory, vol. 45, no. 2, pp. 399–431, Mar. 1999.
[6] G. Böcherer, F. Steiner, and P. Schulte, "Bandwidth efficient and rate-matched low-density parity-check coded modulation," IEEE Trans. Commun., vol. 63, no. 12, Dec. 2015.
[7] G. D. Forney, Jr., R. G. Gallager, G. Lang, F. M. Longstaff, and S. U. Qureshi, "Efficient modulation for band-limited channels," IEEE J. Sel. Areas Commun., vol. 2, no. 5, pp. 632–647, Sep. 1984.
[8] F. R. Kschischang and S. Pasupathy, "Optimal nonuniform signaling for Gaussian channels," IEEE Trans. Inf. Theory, vol. 39, no. 3, pp. 913–929, May 1993.
[9] G. Ungerboeck, "Huffman shaping," in Codes, Graphs, and Systems, R. E. Blahut and R. Koetter, Eds. New York, NY, USA: Springer-Verlag, 2002, ch. 17, pp. 299–313.
[10] G. Böcherer and R. Mathar, "Matching dyadic distributions to channels," in Proc. Data Compress. Conf., Snowbird, UT, USA, Mar. 2011, pp. 23–32.
[11] R. A. Amjad and G. Böcherer, "Fixed-to-variable length distribution matching," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Istanbul, Turkey, Jul. 2013, pp. 1511–1515.
[12] N. Cai, S.-W. Ho, and R. W. Yeung, "Probabilistic capacity and optimal coding for asynchronous channel," in Proc. IEEE Inf. Theory Workshop (ITW), Lake Tahoe, CA, USA, Sep. 2007, pp. 54–59.
[13] S. Baur and G. Böcherer, "Arithmetic distribution matching," in Proc. 10th Int. ITG Conf. Syst., Commun., Coding, Hamburg, Germany, Feb. 2015, pp. 1–6.
[14] R. A. Amjad, "Algorithms for simulation of discrete memoryless sources," M.S. thesis, Inst. Commun. Eng., Technische Universität München, Munich, Germany, 2013.
[15] P. Schulte, "Zero error fixed length distribution matching," M.S. thesis, Inst. Commun. Eng., Technische Universität München, Munich, Germany, 2014.
[16] G. Böcherer and R. A. Amjad, "Informational divergence and entropy rate on rooted trees with probabilities," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Honolulu, HI, USA, Jun./Jul. 2014, pp. 176–180.
[17] I. Csiszár and P. C. Shields, "Information theory and statistics: A tutorial," Found. Trends Commun. Inf. Theory, vol. 1, no. 4, pp. 417–528, 2004.
[18] G. Böcherer and B. C. Geiger. (Mar. 2015). "Optimal quantization for distribution synthesis." [Online]. Available: http://arxiv.org/abs/1307.6843
[19] T. V. Ramabadran, "A coding scheme for m-out-of-n codes," IEEE Trans. Commun., vol. 38, no. 8, pp. 1156–1163, Aug. 1990.
[20] K. Sayood, Introduction to Data Compression. Amsterdam, The Netherlands: Elsevier, 2006.
[21] A Fixed-to-Fixed Length Distribution Matcher in C/MATLAB. [Online]. Available: http://beam.to/ccdm, accessed Nov. 12, 2015.

Patrick Schulte received the B.S. and M.S. degrees in Electrical Engineering, both from Technische Universität München, Germany, in 2012 and 2014, respectively. Since 2014, he has been a doctoral student at the Institute of Communications Engineering of Technische Universität München. His current research interests are iterative channel coding and probabilistic shaping.

Georg Böcherer (S'05–M'14) was born in Freiburg im Breisgau, Germany. He obtained his M.Sc. degree in Electrical Engineering and Information Technology from ETH Zürich in 2007, and his Ph.D. degree from RWTH Aachen University in 2012. He is now a senior researcher at the Institute for Communications Engineering, Technische Universität München. His current research interests are coding, modulation, and probabilistic shaping for optical, wireless, and wired communications.

Authorized licensed use limited to: NUST School of Electrical Engineering and Computer Science (SEECS). Downloaded on October 01,2023 at 15:52:20 UTC from IEEE Xplore. Restrictions apply.
