1. Introduction
First presented by Robert J. McEliece in 1978 [1], the McEliece cryptosystem represents
one of the most famous examples of public key cryptosystems based on error correcting codes.
Alice, in order to send encrypted messages to Bob, fetches his public key G′ from
the public directory, divides her message into k-bit words, and applies the encryption
map as follows:
x = u · G′ + e, (2)
where e is a random vector of t intentional errors.

[Figure 1. Block diagram of the McEliece cryptosystem: the public key G′ = S−1 · G · P−1 is fetched from the public directory; Alice applies the Goppa encoder and adds the intentional errors; the ciphertext x crosses an unsecure channel; Bob applies the permutation, the Goppa decoder and the descrambling.]

Upon receiving the ciphertext, Bob first multiplies it by the secret permutation matrix P:
x′ = x · P = u · S−1 · G + e · P. (3)
By exploiting Goppa decoding, Bob is able to correct all the t intentional errors. Hence
he can obtain u · S−1 , due to the systematic form of G, and then recover u through
multiplication by S. The main blocks of the McEliece cryptosystem are shown in Figure
1.
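As a purely illustrative sketch (hypothetical toy sizes, numpy arithmetic modulo 2, and a random systematic matrix playing the role of the Goppa generator, so no real error correction is involved), the relations (2) and (3) can be verified numerically:

```python
# Toy numerical check of Eqs. (2)-(3); all sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def gf2_inv(A):
    """Invert a binary matrix over GF(2) by Gaussian elimination (IndexError if singular)."""
    n = A.shape[0]
    M = np.hstack([A % 2, np.eye(n, dtype=np.uint8)]).astype(np.uint8)
    for col in range(n):
        piv = col + np.flatnonzero(M[col:, col])[0]
        M[[col, piv]] = M[[piv, col]]
        rows = np.flatnonzero(M[:, col])
        M[rows[rows != col]] ^= M[col]
    return M[:, n:]

k, n, t = 4, 8, 1                                         # toy parameters, not secure
G = np.hstack([np.eye(k, dtype=np.uint8),                 # systematic stand-in for the Goppa generator
               rng.integers(0, 2, (k, n - k), dtype=np.uint8)])
while True:                                               # draw a non-singular scrambler S
    S = rng.integers(0, 2, (k, k), dtype=np.uint8)
    try:
        S_inv = gf2_inv(S)
        break
    except IndexError:
        continue
P = np.eye(n, dtype=np.uint8)[rng.permutation(n)]         # secret permutation (P.T is its inverse)
G_pub = (S_inv @ G @ P.T) % 2                             # public key G' = S^-1 * G * P^-1

u = rng.integers(0, 2, k, dtype=np.uint8)                 # cleartext block
e = np.zeros(n, dtype=np.uint8)
e[rng.choice(n, t, replace=False)] = 1                    # t intentional errors
x = (u @ G_pub + e) % 2                                   # ciphertext, Eq. (2)

# Bob's first step, Eq. (3): multiplying by P removes the permutation before Goppa decoding.
assert np.array_equal((x @ P) % 2, (u @ S_inv @ G + e @ P) % 2)
```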
In his original formulation, McEliece adopted Goppa codes with length n = 1024
and dimension k = 524, able to correct up to t = 50 errors. The key size is hence
k × n = 524 × 1024 bits, i.e., 67072 bytes, and the transmission rate is k/n ≈ 0.51. On the other hand, the RSA
system with 1024-bit modulus and public exponent 17 has keys of just 256 bytes and
reaches unitary transmission rate (i.e., encryption has no overhead on the transmission).
However, it must be considered that the McEliece cryptosystem is significantly faster
than RSA: it requires 514 binary operations per bit for encoding and 5140 for decoding.
By contrast, RSA requires 2402 and 738112 binary operations per bit for encoding
and decoding, respectively [5].
2. The McEliece cryptosystem based on QC-LDPC codes

In this section, a recent version of the McEliece cryptosystem based on QC-LDPC codes
is described. It exploits the peculiarities of QC-LDPC codes to overcome the drawbacks
of the original system, and it is able to resist all currently known attacks.
First, some basic properties of QC-LDPC codes are recalled; then it is shown how
the McEliece cryptosystem should be modified in order to use these codes as private and
public keys without incurring security issues.
2.1. QC-LDPC codes based on difference families
LDPC codes represent a particular class of linear block codes, able to approach chan-
nel capacity when soft decision decoding algorithms based on the belief propagation
principle are adopted [14].
An (n, k) LDPC code C is defined as the kernel of a sparse (n − k) × n parity-check
matrix H:
C = {c ∈ GF(2)^n : H · c^T = 0}.    (4)
In order to achieve very good performance under belief propagation decoding, the parity-
check matrix H must have a low density of 1 symbols (typically on the order of 10^−3)
and absence of short cycles in the associated Tanner graph. The shortest possible cycles,
that have length four, are avoided when any pair of rows (columns) has supports with no
more than one overlapping position.
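As a small illustration of this condition (purely illustrative code, with arbitrary toy matrices), the overlap between row supports can be checked with a single integer matrix product:

```python
# A binary parity-check matrix is free of length-4 cycles iff no two distinct rows
# (and, applying the same check to H.T, no two distinct columns) share >= 2 positions.
import numpy as np

def has_length4_cycle(H):
    """Return True if some pair of distinct rows of H overlaps in two or more positions."""
    H = np.asarray(H, dtype=np.int64)
    overlaps = H @ H.T                       # entry (i, j) = size of supp(row i) ∩ supp(row j)
    np.fill_diagonal(overlaps, 0)            # ignore each row against itself
    return bool((overlaps >= 2).any())

H_good = np.array([[1, 1, 0, 0, 1, 0],
                   [0, 1, 1, 0, 0, 1],
                   [1, 0, 1, 1, 0, 0]])
H_bad = np.array([[1, 1, 0, 1, 0, 0],
                  [1, 1, 0, 0, 1, 0],        # shares two positions with the first row
                  [0, 0, 1, 0, 1, 1]])
print(has_length4_cycle(H_good), has_length4_cycle(H_bad))   # False True
```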
These conditions suffice to obtain good LDPC codes, which can therefore be designed
through algorithms that work directly on the parity-check matrix, aiming at maximizing
the length of the cycles, like the Progressive Edge Growth (PEG) algorithm [18]. The codes
obtained are unstructured, in the sense that the positions of 1 symbols in each row (or
column) of the parity-check matrix are independent of the others. This feature increases the
complexity of the encoding and decoding stages, since the whole matrix must be stored
and the codec implementation cannot take advantage of any cyclic or polynomial na-
ture of the code. In this case, a common solution consists in adopting lower triangu-
lar or quasi-lower triangular parity-check matrices, that correspond to sparse generator
matrices, in such a way as to reduce complexity of the encoding stage [19].
In contrast with this approach, structured LDPC codes have also been proposed, whose
parity-check matrices have a very simple inner structure. Among them, QC-LDPC codes
represent a very important class, able to combine the easy encoding of QC codes with the
excellent performance of LDPC codes. For this reason, QC-LDPC codes have been included
in several recent telecommunication standards and applications [20,21].
QC-LDPC codes have both length and dimension multiple of an integer p, that is,
n = n0 p and k = k0 p. They have the property that each cyclic shift of a codeword by
n0 positions is still a valid codeword. This is reflected in their parity-check matrices, which
are formed by circulant blocks. A p × p circulant matrix A over GF(2) is defined as
follows:
A = \begin{bmatrix}
a_0 & a_1 & a_2 & \cdots & a_{p-1} \\
a_{p-1} & a_0 & a_1 & \cdots & a_{p-2} \\
a_{p-2} & a_{p-1} & a_0 & \cdots & a_{p-3} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_1 & a_2 & a_3 & \cdots & a_0
\end{bmatrix},    (5)

where a_i ∈ GF(2), i = 0, ..., p − 1.
A simple isomorphism exists between the algebra of p × p binary circulant matrices
and the ring of polynomials GF(2)[x]/(x^p + 1). If we denote by X the unitary cyclic
permutation matrix, the isomorphism maps X into the monomial x and the circulant matrix
\sum_{i=0}^{p-1} a_i X^i into the polynomial \sum_{i=0}^{p-1} a_i x^i ∈ GF(2)[x]/(x^p + 1). This isomorphism
can be easily extended to matrices formed by circulant blocks.
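The isomorphism can be checked numerically on toy values: the product of two circulant matrices is again circulant, and its first row is the product of the corresponding polynomials in GF(2)[x]/(x^p + 1). The snippet below is only an illustration with arbitrary small parameters:

```python
# Product of binary circulants <-> product of polynomials mod (x^p + 1).
import numpy as np

def circulant(first_row):
    """Binary circulant matrix built from its first row, as in Eq. (5)."""
    return np.array([np.roll(first_row, i) for i in range(len(first_row))], dtype=np.uint8)

def polymul_mod(a, b):
    """Product of two coefficient vectors in GF(2)[x]/(x^p + 1)."""
    c = np.zeros(len(a), dtype=np.uint8)
    for i in np.flatnonzero(a):
        c ^= np.roll(b, i)                   # add x^i * b(x); x^p wraps around to 1
    return c

p = 7
a = np.array([1, 0, 1, 1, 0, 0, 0], dtype=np.uint8)      # a(x) = 1 + x^2 + x^3
b = np.array([0, 1, 0, 0, 1, 0, 1], dtype=np.uint8)      # b(x) = x + x^4 + x^6

C = (circulant(a).astype(int) @ circulant(b)) % 2        # matrix product over GF(2)
assert np.array_equal(C, circulant(polymul_mod(a, b)))   # C is circulant(a(x) * b(x))
```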
Let us focus attention on a particular family of QC-LDPC codes, having the parity-check
matrix formed by a single row of n0 circulant blocks, each with row (column) weight dv:

H = [H0 | H1 | · · · | Hn0−1].    (6)
If we suppose (without loss of generality) that Hn0−1 is non-singular, a valid generator
matrix for the code in systematic form can be expressed as follows:

G = \left[ I \;\middle|\; \begin{matrix} (H_{n_0-1}^{-1} \cdot H_0)^T \\ (H_{n_0-1}^{-1} \cdot H_1)^T \\ \vdots \\ (H_{n_0-1}^{-1} \cdot H_{n_0-2})^T \end{matrix} \right].    (7)
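A toy construction of Eqs. (6)-(7) is sketched below (illustrative parameters only, far too small to be of cryptographic interest); it builds H from n0 random circulant blocks of column weight dv, derives the systematic G, and verifies that H · G^T = 0:

```python
# Toy QC-LDPC construction: H = [H_0 | ... | H_{n0-1}] and the systematic G of Eq. (7).
import numpy as np

rng = np.random.default_rng(1)

def circulant(first_row):
    return np.array([np.roll(first_row, i) for i in range(len(first_row))], dtype=np.uint8)

def gf2_inv(A):
    """GF(2) matrix inverse by Gaussian elimination (IndexError if A is singular)."""
    n = A.shape[0]
    M = np.hstack([A % 2, np.eye(n, dtype=np.uint8)]).astype(np.uint8)
    for col in range(n):
        piv = col + np.flatnonzero(M[col:, col])[0]
        M[[col, piv]] = M[[piv, col]]
        rows = np.flatnonzero(M[:, col])
        M[rows[rows != col]] ^= M[col]
    return M[:, n:]

p, n0, dv = 11, 3, 3
while True:                                    # redraw until the last block is invertible
    first_rows = []
    for _ in range(n0):
        r = np.zeros(p, dtype=np.uint8)
        r[rng.choice(p, dv, replace=False)] = 1
        first_rows.append(r)
    blocks = [circulant(r) for r in first_rows]
    try:
        H_last_inv = gf2_inv(blocks[-1])
        break
    except IndexError:
        continue

H = np.hstack(blocks)                          # Eq. (6)
right = np.vstack([(H_last_inv.astype(int) @ B % 2).T for B in blocks[:-1]]).astype(np.uint8)
G = np.hstack([np.eye((n0 - 1) * p, dtype=np.uint8), right])   # Eq. (7)

assert np.all((H.astype(int) @ G.T) % 2 == 0)  # every row of G is a codeword
print(H.shape, G.shape)                        # (11, 33) (22, 33)
```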
The adoption of QC-LDPC codes in the McEliece cryptosystem can yield important
advantages in terms of key size and transmission rate. Like any other family of linear
block codes, QC-LDPC codes are exposed to the same attacks targeted at the original
cryptosystem; among them, decoding attacks represent the most dangerous ones (as
will be shown in Section 3.3).
[Figure 2. Error rate performance (BER and FER) of the three QC-LDPC codes considered, with (n, k) = (16384, 12288), (24576, 16384) and (49152, 32768), as a function of the number of errors t.]

Moreover, the adoption of LDPC codes could expose the system to new attacks,
due to the sparse nature of their matrices. It was already observed in [12] that LDPC
matrices cannot be used for obtaining the public key, not even after applying a linear
transformation through a sparse matrix. In this case, the secret LDPC matrix could be
recovered through density reduction attacks, that aim at finding the rows of the secret
matrix by exploiting their low density [12,25].
One could think of replacing LDPC matrices with their corresponding generator matrices,
which, in general, are dense. Actually, this is what happens in the original McEliece
cryptosystem, where a systematic generator matrix for the secret Goppa code is used,
hidden through a permutation. However, a permutationally equivalent code of an LDPC
code is still an LDPC code, and the rows of its LDPC matrix could be found by searching
for low weight codewords in the dual of the secret code. We call this strategy attack to
the dual code: it aims at finding a sparse representation for the parity-check matrix of the
public code, that can be used for effective LDPC decoding.
So, when adopting LDPC codes in the McEliece cryptosystem, it does not suffice to
hide the secret code through a permutation, but it must be ensured that the public code
does not admit sparse characteristic matrices. For this reason, it has been proposed to
replace the permutation matrix P with a different transformation matrix, Q [13]. Q is a
sparse n × n matrix, with rows and columns having Hamming weight m > 1. This way,
the LDPC matrix of the secret code (H) is mapped into a new parity-check matrix that is
valid for the public code:
H′ = H · QT . (8)
Depending on the value of m, the density of H′ could be rendered high enough to avoid
attacks to the dual code.
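The effect of Eq. (8) on the matrix density can be visualized with a rough numerical sketch; here Q is simply a random matrix with about m ones per row and column (not the block-structured, non-singular Q of the actual scheme), which is enough to show that the density of H′ grows roughly by a factor m:

```python
# Rough density check for Eq. (8): H' = H * Q^T, with a generic random "regular" Q.
import numpy as np

rng = np.random.default_rng(2)

def random_regular(n, m):
    """n x n binary matrix obtained by superposing m random permutation matrices."""
    M = np.zeros((n, n), dtype=np.uint8)
    for _ in range(m):
        M[np.arange(n), rng.permutation(n)] ^= 1
    return M

n, r, dv, m = 600, 200, 13, 7
H = np.zeros((r, n), dtype=np.uint8)
for row in H:
    row[rng.choice(n, dv, replace=False)] = 1          # sparse secret parity-check matrix

Q = random_regular(n, m)
H_pub = (H.astype(int) @ Q.T) % 2                      # parity-check matrix valid for the public code

print("density of H :", H.mean())                      # about dv / n
print("density of H':", H_pub.mean())                  # about m * dv / n, minus cancellations
```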
Table 1. Choices of the parameters for the QC-LDPC-based McEliece cryptosystem.
System n0 dv p m t′ Key size (bytes)
1 4 13 4096 7 27 6144
2 3 13 8192 11 40 6144
3 3 15 16384 13 60 12288
In the modified cryptosystem, Bob chooses a secret LDPC code by fixing its parity-
check matrix, H, and selects two other secret matrices: a k × k non singular scrambling
matrix S and an n × n non singular transformation matrix Q with row/column weight
m. Then, Bob obtains a systematic generator matrix G for the secret code and produces
his public key as follows:

G′ = S−1 · G · Q−1.    (9)
It should be noted that the public key is a dense matrix, so the sparse character of LDPC
codes does not help in reducing the key length. However, when adopting QC-LDPC codes,
the characteristic matrices are formed by circulant blocks that are completely described
by a single row or column. This fact significantly reduces the key length, which, moreover,
grows only linearly with the code length.
The encryption map is the same as in the original cryptosystem: G′ is used for
encoding and a vector e of intentional errors is added to the encoded word. The Hamming
weight of vector e, in this case, is denoted as t′ . The decryption map must be slightly
modified with respect to the original cryptosystem. After having received a ciphertext,
Bob must invert the transformation as follows:
x′ = x · Q = u · S−1 · G + e · Q, (10)
thus obtaining a codeword of the secret LDPC code affected by the error vector e · Q
with weight ≤ t = t′ m. After that, Bob must be able to correct all the errors through
LDPC decoding and obtain u · S−1 , due to the systematic form of G. Finally, he can
recover u through multiplication by S.
It should be noted that the introduction of the transformation matrix Q in place
of the permutation matrix causes an error amplification effect (by a factor m). This is
compensated by the error correction capability of the secret LDPC code, that must be
able to correct t errors.
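The error amplification can also be checked numerically; in this sketch Q is again a generic random matrix with about m ones per row and column, and the weight of e · Q is compared with the bound t = t′m:

```python
# If e has weight t' and Q has (about) m ones per row/column, e*Q has weight at most t'*m.
import numpy as np

rng = np.random.default_rng(3)
n, m, t_prime = 600, 7, 27

Q = np.zeros((n, n), dtype=np.uint8)
for _ in range(m):
    Q[np.arange(n), rng.permutation(n)] ^= 1           # ~m ones per row and per column

e = np.zeros(n, dtype=np.uint8)
e[rng.choice(n, t_prime, replace=False)] = 1           # t' intentional errors

print((e.astype(int) @ Q % 2).sum(), "<=", t_prime * m)   # weight of e*Q vs. the bound t
```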
Based on this scheme, two possible choices of the system parameters have been
recently proposed, which are able to ensure different levels of security against currently
known attacks [26]. A third choice is considered here, which demonstrates how the
cryptosystem scales favorably when larger keys are needed to face efficient implementations
of the attacks, such as those recently proposed. For the three codes considered (whose
performance is reported in Figure 2), t = 189, 440 and 780 have been assumed, respectively,
and m and t′ have been fixed accordingly. The considered values of the parameters
are summarized in Table 1. It should be noted that the key size is simply k0 · n0 · p bits, since the
whole matrix can be described by storing only the first row (or column) of each circulant
block.
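The key sizes in Table 1 follow directly from this observation, as the short computation below shows (k0 = n0 − 1 for the codes considered here):

```python
# Key-size arithmetic behind Table 1: k0 x n0 circulant blocks, each described by p bits.
for n0, p in [(4, 4096), (3, 8192), (3, 16384)]:
    k0 = n0 - 1
    print(f"n0 = {n0}, p = {p}: key = {k0 * n0 * p // 8} bytes, rate = {k0 / n0:.2f}")
# -> 6144 bytes (rate 0.75), 6144 bytes (rate 0.67), 12288 bytes (rate 0.67)
```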
Table 2. Work factors of attacks to the dual code.

System  n0  dv  p      m   Max WF   w(WF ≥ 2^80)
1       4   13  4096   7   2^153    179
2       3   13  8192   11  2^250    127
3       3   15  16384  13  2^340    124
3. Attacks to the cryptosystem

For the sake of conciseness, this section considers only the attacks that are able to achieve
the lowest work factors against the considered cryptosystem, together with their possible
countermeasures.
3.1. Attacks to the dual code

This kind of attack exploits the fact that the dual of the public code, which is generated by
H′, may contain low weight codewords, and such codewords can be searched for through
probabilistic algorithms. Each row of H′ is a valid codeword of the dual code, so the dual code
contains at least Aw ≥ (n − k) codewords with weight w ≤ dc m, where dc = n0 dv is the row
weight of H.
It should be observed that dc ≪ n and the supports of sparse vectors have very
small (or null) intersection. So, by introducing an approximation, we can consider Aw ≈
(n − k). With similar arguments, and assuming a small m, we can say that the rows of
H′ have weight w ≈ dc m = n0 dv m.
One of the most famous probabilistic algorithms for finding low weight codewords is
due to Stern [4] and exploits an iterative procedure. When Stern’s algorithm is performed
on a code having length nS and dimension kS , the probability of finding, in one iteration,
one of the Aw codewords with weight w is [27]:

P_{w,A_w} \leq A_w \cdot \frac{\binom{w}{g}\binom{n_S-w}{k_S/2-g}}{\binom{n_S}{k_S/2}} \cdot \frac{\binom{w-g}{g}\binom{n_S-k_S/2-w+g}{k_S/2-g}}{\binom{n_S-k_S/2}{k_S/2}} \cdot \frac{\binom{n_S-k_S-w+2g}{l}}{\binom{n_S-k_S}{l}},    (11)

where g and l are two parameters whose values must be optimized as functions of the
total number of binary operations. So, the average number of iterations needed to find a
low weight codeword is c \geq P_{w,A_w}^{-1}. Each iteration requires:

N = \frac{(n_S-k_S)^3}{2} + k_S (n_S-k_S)^2 + 2 g l \binom{k_S/2}{g} + \frac{2 g (n_S-k_S) \binom{k_S/2}{g}^2}{2^l}.    (12)
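The resulting work factor is WF ≈ c · N. The sketch below evaluates Eqs. (11)-(12) in the logarithmic domain and minimizes over g and l; it is only an illustration of the computation (the search ranges for g and l are arbitrary, and the figures in Table 2 also depend on details not reproduced here):

```python
# Sketch of the Stern work-factor estimate WF ~ N / P_{w,Aw}, following Eqs. (11)-(12).
from math import comb, lgamma, log, log2

def log2_comb(n, k):
    if k < 0 or k > n:
        return float("-inf")
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)) / log(2)

def stern_log2_wf(nS, kS, w, Aw, g_max=60, l_max=80):
    """Minimum log2 work factor of Stern's algorithm over the parameters g and l."""
    best_wf, best_gl = float("inf"), None
    for g in range(0, g_max + 1):
        for l in range(1, l_max + 1):
            # Eq. (11): log2 of the success probability of a single iteration
            log2_P = (log2(Aw)
                      + log2_comb(w, g) + log2_comb(nS - w, kS // 2 - g)
                      - log2_comb(nS, kS // 2)
                      + log2_comb(w - g, g) + log2_comb(nS - kS // 2 - w + g, kS // 2 - g)
                      - log2_comb(nS - kS // 2, kS // 2)
                      + log2_comb(nS - kS - w + 2 * g, l) - log2_comb(nS - kS, l))
            # Eq. (12): binary operations per iteration
            N = ((nS - kS) ** 3 / 2 + kS * (nS - kS) ** 2
                 + 2 * g * l * comb(kS // 2, g)
                 + 2 * g * (nS - kS) * comb(kS // 2, g) ** 2 / 2 ** l)
            wf = log2(N) - log2_P
            if wf < best_wf:
                best_wf, best_gl = wf, (g, l)
    return best_wf, best_gl

# Dual code of System 1: length nS = n0*p, dimension kS = n - k = p, codewords of
# weight w ~ n0*dv*m, with about Aw ~ n - k of them.
n0, dv, p, m = 4, 13, 4096, 7
print(stern_log2_wf(nS=n0 * p, kS=p, w=n0 * dv * m, Aw=p))
```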
3.2. The OTD attack

In the cryptosystem version proposed in [13], both S and Q were chosen sparse, with
non-null blocks having row/column weight m, and

Q = \begin{bmatrix} Q_0 & 0 & \cdots & 0 \\ 0 & Q_1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & Q_{n_0-1} \end{bmatrix}.    (13)
This gave rise to an attack formulated by Otmani, Tillich and Dallot, that is here denoted
as the OTD attack [16].
The rationale of this attack lies in the observation that, by selecting the first k
columns of G′, an eavesdropper can obtain

G'_{\leq k} = S^{-1} \cdot \begin{bmatrix} Q_0^{-1} & 0 & \cdots & 0 \\ 0 & Q_1^{-1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & Q_{n_0-2}^{-1} \end{bmatrix}.    (14)
Then, by inverting G′≤k and considering its block at position (i, j), the eavesdropper can obtain Qi · Si,j,
that corresponds to the polynomial

g_{i,j}(x) = q_i(x) · s_{i,j}(x) mod (x^p + 1).    (15)

If both Qi and Si,j are sparse, it is highly probable that g_{i,j}(x) has exactly m^2 non-null
coefficients and that its support contains at least one shift x^{la} · q_i(x), 0 ≤ la ≤ p − 1.
Three possible strategies have been proposed for implementing this attack. Accord-
ing to the first strategy, the attacker can enumerate all the m-tuples belonging to the sup-
port of gi,j (x). Each m-tuple can then be validated through inversion of its correspond-
ing polynomial and multiplication by gi,j (x). If the resulting polynomial has exactly m
non-null coefficients, the m-tuple is a shifted version of qi (x) with very high probability.
The second strategy exploits the fact that it is highly probable that the Hadamard product
of the polynomial g_{i,j}(x) with a d-shifted version of itself, g_{i,j}(x) ∗ g_{i,j}^{(d)}(x), gives a
shifted version of q_i(x), for a specific value of d. The eavesdropper can hence calculate
all the possible g_{i,j}(x) ∗ g_{i,j}^{(d)}(x) and check whether the resulting polynomial has m non-null
coefficients. As a third strategy, the attacker can consider the i-th row of the inverse
of G′≤k.
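The second strategy can be illustrated with a toy experiment (illustrative parameters; in the real attack g(x) is one of the blocks obtained from the inverse of G′≤k rather than being generated directly):

```python
# Toy version of the second OTD strategy: the Hadamard (coefficient-wise) product of
# g(x) = q(x)*s(x) with a suitably shifted copy of itself is likely to expose a shifted
# version of the sparse factor q(x).
import numpy as np

rng = np.random.default_rng(4)
p, m = 101, 5

def sparse_poly(p, weight):
    v = np.zeros(p, dtype=np.uint8)
    v[rng.choice(p, weight, replace=False)] = 1
    return v

def polymul_mod(a, b):                        # product in GF(2)[x]/(x^p + 1)
    c = np.zeros(len(a), dtype=np.uint8)
    for i in np.flatnonzero(a):
        c ^= np.roll(b, i)
    return c

q = sparse_poly(p, m)                         # sparse secret factor q_i(x)
s = sparse_poly(p, m)                         # sparse secret factor s_{i,j}(x)
g = polymul_mod(q, s)                         # known to the attacker

candidates = [g & np.roll(g, d) for d in range(1, p)]            # all Hadamard products
candidates = [c for c in candidates if c.sum() == m]             # keep the weight-m ones
shifts_of_q = {tuple(np.roll(q, la)) for la in range(p)}
print(any(tuple(c) in shifts_of_q for c in candidates))          # True with high probability
```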
3.3. Decoding attacks

As stated in the Introduction, the most promising attacks against the McEliece cryptosystem
are those aiming at solving the general decoding problem, that is, at obtaining the error
vector e used for encrypting a ciphertext.
It can be easily shown that e can be searched for as the lowest weight codeword in the
extended code generated by

G'' = \begin{bmatrix} G' \\ x \end{bmatrix}.    (19)
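For toy parameters, the lowest weight codeword of this extended code can even be found by exhaustive search, which illustrates why e leaks; the sketch below uses a random systematic matrix in place of the actual public key, and real attacks of course rely on probabilistic algorithms such as Stern's:

```python
# Tiny brute-force illustration of the attack based on Eq. (19): e shows up as a very
# low weight codeword of the code spanned by the rows of G' and the ciphertext x.
import numpy as np
from itertools import product

rng = np.random.default_rng(5)
k, n, t_prime = 6, 24, 2

G_pub = np.hstack([np.eye(k, dtype=np.uint8),
                   rng.integers(0, 2, (k, n - k), dtype=np.uint8)])
u = rng.integers(0, 2, k, dtype=np.uint8)
e = np.zeros(n, dtype=np.uint8)
e[rng.choice(n, t_prime, replace=False)] = 1
x = (u @ G_pub + e) % 2

G_ext = np.vstack([G_pub, x])                                    # extended code, Eq. (19)

best = None
for coeffs in product([0, 1], repeat=k + 1):                     # exhaustive span search
    c = (np.array(coeffs, dtype=np.uint8) @ G_ext) % 2
    if c.any() and (best is None or c.sum() < best.sum()):
        best = c
print(np.array_equal(best, e))   # True whenever no other nonzero codeword has weight <= t'
```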
In order to evaluate the work factor of such attacks, we refer to Stern’s algorithm,
whose complexity can be easily evaluated in closed form, as already shown in Section
3.1. Stern’s algorithm has been further improved in [5] and, very recently, in [6]. Esti-
mating the work factor of such modified algorithms is more involved, and requires mod-
eling the attack through Markov chains. For this reason, we continue to refer to Stern’s
original formulation. For our purposes, it seems sufficient to take into consideration that
the adoption of optimized algorithms could result in a further speedup of about 12 times,
as reported in [6]. According to the expressions reported in Section 3.1, the work factor
of a decoding attack against the original McEliece cryptosystem based on Stern's
algorithm would be 2^63.5.
In the considered cryptosystem based on QC-LDPC codes, an extra speedup could
result from the quasi-cyclic nature of the codes, which implies that every blockwise
cyclically shifted version of the ciphertext x is still a valid ciphertext. So, an eavesdropper
could continue extending G′′ by adding shifted versions of x, and could search
for as many shifted versions of the error vector. Figure 3 reports the values of the work
factor of decoding attacks against the considered cryptosystem as functions of the number of
rows added to G′′. The three considered choices of the system parameters reach, respectively,
a minimum work factor of 2^65.6, 2^75.8 and 2^106.5 binary operations.
Since these are the smallest work factors reached by currently known attacks, these values
can be considered as the security levels of the three cryptosystems.
[Figure 3. Work factor (log2 WF) of decoding attacks as a function of the number of rows added to G′′, for Systems 1, 2 and 3.]
4. Complexity
The decryption cost includes the term C_SPA, that is, the number of operations required for
LDPC decoding through the sum-product algorithm. By referring to the implementation proposed
in [28], C_SPA can be expressed as a function of the code parameters, of the average number
of decoding iterations I_ave, and of the number of quantization bits q used inside the decoder
(both I_ave and q can be estimated through simulations).
Table 3. Parameters of the considered cryptosystems.

                     McEliece      Niederreiter   RSA (1024-bit mod.,   QC-LDPC      QC-LDPC      QC-LDPC
                     (1024, 524)   (1024, 524)    public exp. 17)       McEliece 1   McEliece 2   McEliece 3
Key size (bytes)     67072         32750          256                   6144         6144         12288
Rate                 0.51          0.57           1                     0.75         0.67         0.67
k (bits)             524           284            1024                  12288        16384        32768
C_enc/k (ops/bit)    514           50             2402                  658          776          1070
C_dec/k (ops/bit)    5140          7863           738112                4678         8901         12903

(k is the information block length; C_enc/k and C_dec/k are the numbers of binary operations per information bit for encryption and decryption, respectively.)
From these expressions, it is possible to estimate the encryption and decryption
cost in terms of binary operations per information bit. This has been done in Table 3, which
summarizes the main parameters of the considered cryptosystems and compares them
with those of more consolidated solutions (for the first three systems the complexity
estimates are reported from [5]).
It can be noticed that all three systems based on QC-LDPC codes have shorter
keys and higher rates than the original McEliece cryptosystem and the Niederreiter
version; so, they succeed in overcoming the main drawbacks of those schemes. In particular, the first
QC-LDPC-based system, which reaches a security level comparable with that of the original
McEliece cryptosystem, has a key size more than 10 times smaller than the original system
and more than 5 times smaller than the Niederreiter version. Furthermore, the new
system has an increased transmission rate (up to 3/4).
The security level can be increased at the expense of the transmission rate: the
second QC-LDPC-based system has the same key size as the first one, but its transmission
rate is reduced from 3/4 to 2/3. In exchange, its security level is increased by a
factor of about 2^10.
Larger keys can be adopted in order to reach higher security levels, which are needed
to face efficient decoding attacks implemented on modern computers. The third QC-LDPC-based
system is able to reach a security level of 2^106.5 by doubling the key size
(which is still more than 5 times smaller than in the original cryptosystem). It should be
noted that the system scales favorably when larger keys are needed, since the key size
grows linearly with the code length, due to the quasi-cyclic nature of the codes, while in
the original system it grows quadratically.
As concerns complexity, it can be observed that the first QC-LDPC-based cryptosys-
tem has encryption and decryption costs comparable with those of the original McEliece
cryptosystem. The Niederreiter version is instead able to significantly reduce the encryp-
tion cost. Encryption and decryption complexity increases for the other two QC-LDPC-
based variants, but it still remains considerably lower than that of RSA. On the other
hand, RSA has the smallest keys and reaches unitary rate.
5. Conclusion
It has been shown that the adoption of LDPC codes in the framework of the McEliece
cryptosystem can help overcome its main drawbacks, namely large keys and low transmission
rate. However, such a choice must be made carefully, since the sparse nature of
the characteristic matrices of LDPC codes can expose the system to classic as well as
newly developed attacks. In particular, the misuse of sparse transformation matrices can
expose the system to total break attacks, able to recover the secret key with reasonable
complexity.
The adoption of dense transformation matrices makes it possible to avoid such attacks, while the
quasi-cyclic nature of the codes still allows the key size to be reduced. Furthermore, the
McEliece cryptosystem based on QC-LDPC codes can exploit efficient algorithms for
polynomial multiplication over finite fields for encryption and low complexity LDPC
decoding algorithms for decryption, which reduce its computational complexity.
For these reasons, it seems that the considered variants of the McEliece cryptosystem
can be seen as a trade-off between its original version and other widespread solutions,
like RSA.
Acknowledgments
The author wishes to thank Franco Chiaraluce for his contribution and Raphael Overbeck
for helpful discussion on attacks.
References
[1] R. J. McEliece. A public-key cryptosystem based on algebraic coding theory. DSN Progress Report,
pages 114–116, 1978.
[2] E. Berlekamp, R. McEliece, and H. van Tilborg. On the inherent intractability of certain coding prob-
lems. IEEE Trans. Inform. Theory, 24:384–386, May 1978.
[3] P. Lee and E. Brickell. An observation on the security of McEliece’s public-key cryptosystem. In
Advances in Cryptology - EUROCRYPT 88, pages 275–280. Springer, 1988.
[4] J. Stern. A method for finding codewords of small weight. In G. Cohen and J. Wolfmann, editors,
Coding Theory and Applications, volume 388 of Lecture Notes in Computer Science, pages 106–113.
Springer, 1989.
[5] A. Canteaut and F. Chabaud. A new algorithm for finding minimum-weight words in a linear code:
application to McEliece’s cryptosystem and to narrow-sense BCH codes of length 511. IEEE Trans.
Inform. Theory, 44:367–378, January 1998.
[6] D. J. Bernstein, T. Lange, and C. Peters. Attacking and defending the McEliece cryptosystem. In Post-
Quantum Cryptography, volume 5299 of Lecture Notes in Computer Science, pages 31–46. Springer
Berlin / Heidelberg, 2008.
[7] D. J. Bernstein. Introduction to post-quantum cryptography, chapter 1, pages 1–14. Springer, 2009.
[8] P. W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum
computer. SIAM J. Comput., 26(5):1484–1509, 1997.
[9] H. Niederreiter. Knapsack-type cryptosystems and algebraic coding theory. Probl. Contr. and Inform.
Theory, 15:159–166, 1986.
[10] V. M. Sidelnikov. A public-key cryptosystem based on binary Reed-Muller codes. Discrete Mathematics
and Applications, 4(3), 1994.
[11] P. Gaborit. Shorter keys for code based cryptography. In Proc. Int. Workshop on Coding and Cryptog-
raphy (WCC 2005), pages 81–90, Bergen, Norway, March 2005.
[12] C. Monico, J. Rosenthal, and A. Shokrollahi. Using low density parity check codes in the McEliece
cryptosystem. In Proc. IEEE International Symposium on Information Theory (ISIT 2000), page 215,
Sorrento, Italy, June 2000.
[13] M. Baldi and F. Chiaraluce. Cryptanalysis of a new instance of McEliece cryptosystem based on QC-
LDPC codes. In Proc. IEEE International Symposium on Information Theory (ISIT 2007), pages 2591–
2595, Nice, France, June 2007.
[14] T. J. Richardson and R. L. Urbanke. The capacity of low-density parity-check codes under message-
passing decoding. IEEE Trans. Inform. Theory, 47:599–618, February 2001.
[15] M. Baldi, F. Chiaraluce, R. Garello, and F. Mininni. Quasi-cyclic low-density parity-check codes in
the McEliece cryptosystem. In Proc. IEEE International Conference on Communications (ICC 2007),
Glasgow, Scotland, June 2007.
[16] A. Otmani, J. P. Tillich, and L. Dallot. Cryptanalysis of two McEliece cryptosystems based on quasi-
cyclic codes. In Proc. First International Conference on Symbolic Computation and Cryptography (SCC
2008), Beijing, China, April 2008.
[17] W. Diffie and M. Hellman. New directions in cryptography. IEEE Trans. Inform. Theory, 22:644–654,
November 1976.
[18] X. Y. Hu, E. Eleftheriou, and D. M. Arnold. Regular and irregular progressive edge-growth Tanner
graphs. IEEE Trans. Inform. Theory, 51:386–398, January 2005.
[19] T. J. Richardson and R. L. Urbanke. Efficient encoding of low-density parity-check codes. IEEE Trans.
Inform. Theory, 47:638–656, February 2001.
[20] 802.16e 2005. IEEE Standard for Local and Metropolitan Area Networks - Part 16: Air Interface for
Fixed and Mobile Broadband Wireless Access Systems - Amendment for Physical and Medium Access
Control Layers for Combined Fixed and Mobile Operation in Licensed Bands, December 2005.
[21] CCSDS. Low Density Parity Check Codes for use in Near-Earth and Deep Space Applications. Techni-
cal Report Orange Book, Issue 2, Consultative Committee for Space Data Systems (CCSDS), Washing-
ton, DC, USA, September 2007. CCSDS 131.1-O-2.
[22] S. J. Johnson and S. R. Weller. A family of irregular LDPC codes with low encoding complexity. IEEE
Commun. Lett., 7:79–81, February 2003.
[23] T. Xia and B. Xia. Quasi-cyclic codes from extended difference families. In Proc. IEEE Wireless
Commun. and Networking Conf., volume 2, pages 1036–1040, New Orleans, USA, March 2005.
[24] M. Baldi and F. Chiaraluce. New quasi cyclic low density parity check codes based on difference
families. In Proc. Int. Symp. Commun. Theory and Appl. (ISCTA 05), pages 244–249, Ambleside, UK,
July 2005.
[25] M. Baldi. Quasi-Cyclic Low-Density Parity-Check Codes and their Application to Cryptography. PhD
thesis, Università Politecnica delle Marche, Ancona, Italy, November 2006.
[26] M. Baldi, M. Bodrato, and F. Chiaraluce. A new analysis of the McEliece cryptosystem based on QC-
LDPC codes. In Security and Cryptography for Networks, volume 5229 of Lecture Notes in Computer
Science, pages 246–262. Springer Berlin / Heidelberg, 2008.
[27] M. Hirotomo, M. Mohri, and M. Morii. A probabilistic computation method for the weight distribution
of low-density parity-check codes. In Proc. IEEE International Symposium on Information Theory (ISIT
2005), pages 2166–2170, Adelaide, Australia, September 2005.
[28] X. Y. Hu, E. Eleftheriou, D. M. Arnold, and A. Dholakia. Efficient implementations of the sum-product
algorithm for decoding LDPC codes. In Proc. IEEE Global Telecommunications Conference (GLOBE-
COM ’01), volume 2, pages 1036–1036E, San Antonio, TX, November 2001.