Golay Codes
ABSTRACT
This paper considers two kinds of algorithms. (i) If C is a binary code of length n, a
soft decision decoding algorithm for C changes an arbitrary point of R^n into a nearest
codeword (nearest in Euclidean distance). (ii) Similarly, a decoding algorithm for a
lattice in R^n changes an arbitrary point of R^n into a closest lattice point. Some general
methods are given for constructing such algorithms, and are used to obtain new and faster
decoding algorithms for the Gosset lattice E_8, the Golay code and the Leech lattice.
_______________
* This paper appeared in IEEE Trans. Information Theory, vol. 32 (1986), pp. 41-50.
I. Introduction
Let C be an [n,k] binary code. We regard the codewords as points of n-dimensional
Euclidean space R^n, and wish to find a soft decision decoder for C (also called an
analog or maximum likelihood decoder). By this we mean an algorithm which,
when presented with an arbitrary point x of R^n, will find a codeword u ∈ C that
minimizes dist(x,u), the distance being Euclidean distance. Soft decision decoding has
been investigated by many distinguished information theorists over the years; see
[1]-[12], [14], [31]-[35], [40]-[45], [51], [52], [54], [58], [60], [62], [66]-[69]. However, the
majority of these papers study decoding algorithms that only perform correctly most of
the time. For example, Hackett's decoding algorithm [42] for the Golay code is only a
few tenths of a decibel away from ideal correlation detection. In the present paper
we are only interested in algorithms that always find a closest codeword or lattice point.

The decoding problem for a lattice in R^n is similar. We wish to find an algorithm
which, when presented with an arbitrary point x of R^n, will find a lattice point u that
minimizes dist(x,u). Decoding algorithms for several classes of lattices were given in
[22], [28].
In Section II we collect together all the methods we know for constructing decoding
algorithms (of the type just mentioned, that always give the correct answer) for codes and
lattices. Most of these methods were known already, although some are new. We then
apply these methods to obtain improved decoding algorithms for the E_8 lattice
(Section 2.13), the Golay code (Section III) and the Leech lattice (Section IV).
Potential applications of these algorithms are to channel coding and vector quantizing
(see the references already mentioned, and [12], [13], [21], [27], [36], [37], [64]). It is
worth mentioning that there is already a considerable literature devoted to "hard
decision" or conventional binary decoding of the Golay code ([50, Chapter 16, §9], [5],
[31], [39], [70], [71]).
Notation. Two codes or lattices A and B are geometrically similar if one can be
obtained from the other by (possibly) a translation, rotation, reflection and change of
scale. The direct sum [50, p. 76] of two codes or lattices A and B is written A ⊕ B.
The componentwise product of two vectors u and v is written u * v.
If u, v and w are codewords with

    w = u + v (mod 2)    in 0,1 notation,
(1)
then

    w = u * v    in +1,-1 notation.
(2)
The +1,-1 notation allows us to replace distance calculations with inner product
calculations. For, if x ∈ R^n and u ∈ C,

    dist²(x,u) = (x - u) · (x - u)
               = x · x - 2 x · u + u · u
               = x · x - 2 x · u + n .
(3)
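Identity (3) can be checked numerically. In the sketch below the ±1 "codewords" are an arbitrary illustrative set of ours, not a code from the text:

```python
import random

def dist2(x, u):
    # squared Euclidean distance between x and u
    return sum((xi - ui) ** 2 for xi, ui in zip(x, u))

def ip(x, u):
    # inner product x . u
    return sum(xi * ui for xi, ui in zip(x, u))

# an arbitrary set of +1,-1 vectors standing in for a code of length n = 4
codewords = [(1, 1, 1, 1), (1, -1, 1, -1), (-1, -1, 1, 1), (-1, -1, -1, -1)]
n = 4

random.seed(1)
x = [random.uniform(-2, 2) for _ in range(n)]
for u in codewords:
    # Eq. (3): dist^2(x,u) = x.x - 2 x.u + n, since u.u = n for a +1,-1 vector
    assert abs(dist2(x, u) - (ip(x, x) - 2 * ip(x, u) + n)) < 1e-9

# hence minimizing distance is the same as maximizing the inner product
assert min(codewords, key=lambda u: dist2(x, u)) == max(codewords, key=lambda u: ip(x, u))
```

Since u · u = n for every ±1 vector, only the inner products x · u need be compared.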
The following codes, and those geometrically similar to them, are most of the codes
we know that have a fast soft decision decoding algorithm as described in Section I.
We give rough estimates(1) of the number of arithmetic steps (additions, multiplications,
etc.) required. From (2.2) onwards, "fast" means that the algorithm is substantially
faster than the direct search method used in (2.1).

(2.1) Any small code may be decoded by a direct search. For an [n,k] code we
compute the inner product of the given vector x with every codeword and choose the
closest. Assuming we have precomputed a list of the codewords, this requires roughly
2n · 2^k steps, and is therefore only applicable to small codes. If the code contains the
vector (-1, -1, . . . , -1), i.e. the all-1s vector in 0,1 notation, the codewords
come in pairs +u and -u, and the number of steps drops to n · 2^k.
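The direct search of (2.1) can be sketched in a few lines; the [3,2] even weight code below is a toy stand-in of ours:

```python
from itertools import product

def direct_search_decode(x, codewords):
    """Return a +1,-1 codeword maximizing the inner product with x,
    i.e. minimizing the Euclidean distance, by Eq. (3)."""
    return max(codewords, key=lambda u: sum(xi * ui for xi, ui in zip(x, u)))

# toy example: the [3,2] even weight code in +1,-1 notation
# (all sign patterns with an even number of -1s)
codewords = [u for u in product((1, -1), repeat=3) if u.count(-1) % 2 == 0]

print(direct_search_decode((0.9, -0.2, 0.7), codewords))  # nearest codeword
```

With a precomputed codeword list, the cost is one length-n inner product per codeword, as stated in the text.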
(2.2) A first-order Reed-Muller code, with parameters [n = 2^m, k = m + 1], may
be efficiently decoded using the fast Hadamard transform (the so-called "Green machine"
described in [40], [41], [58], [50, Chap. 14]). This computes the 2^{m+1} inner products
x · u, u ∈ C, in about m · 2^m steps. It is worth pointing out that a first-order Reed-Muller
code is geometrically similar to an octahedron (β_n in Coxeter's notation [30]). For
example, the codewords of the code of length 4 shown in Fig. 1a, when multiplied by the
orthogonal matrix of Fig. 1b, become the 8 vertices

    (±2, 0, 0, 0), . . . , (0, 0, 0, ±2)

of a 4-dimensional octahedron.
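A compact sketch of this decoder, assuming the natural (Sylvester) ordering of the Hadamard rows; the code is ours, not the Green machine itself:

```python
def fht(x):
    """Fast Hadamard transform: returns t with t[k] equal to the inner
    product of x with row k of the Sylvester Hadamard matrix H_n
    (n a power of 2), using about n log2 n additions."""
    t, n, h = list(x), len(x), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = t[j], t[j + h]
                t[j], t[j + h] = a + b, a - b
        h *= 2
    return t

def rm1_decode(x):
    """Soft decision decoder for a first-order Reed-Muller code in +1,-1
    form, whose codewords are the rows of H_n and their negatives: take
    the transform entry of largest magnitude; its sign selects the row or
    its negative."""
    t = fht(x)
    k = max(range(len(t)), key=lambda i: abs(t[i]))
    sign = 1 if t[k] >= 0 else -1
    # entry i of row k of H_n is (-1)^popcount(i AND k)
    return [sign * (-1) ** bin(i & k).count("1") for i in range(len(x))]

print(rm1_decode([1.1, 0.9, -0.8, -1.2]))
```

One transform serves all 2^{m+1} codewords at once, which is exactly the saving over the direct search of (2.1).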
_______________
(1) The footnotes are on page 30.
are quite common. For example, the Golay code 𝒢_24 (see
(4)
in ±1 notation.
x = (x_1 , . . . , x_24 ), we decode it as

    u = aa . . . a  bb . . . b  cc . . . c ,
(5)
where

    a = sgn( x_1 + . . . + x_8 ) ,  b = sgn( x_9 + . . . + x_16 ) ,  c = sgn( x_17 + . . . + x_24 ) .
(6)
A slightly more general family of codes can be decoded in the same way. Let T_a
denote the [a, 1] repetition code of length a. Then

    T_{a_1} ⊕ T_{a_2} ⊕ . . . ⊕ T_{a_n} ,

although in general not geometrically similar to γ_n, can be decoded by an obvious
modification of the preceding algorithm.
(2.4) The even weight code ℰ_n, with parameters [n, k = n - 1], consists of all 0,1
vectors with an even number of 1s, and may be decoded in about 2n steps. To decode
x = (x_1 , . . . , x_n ), we first replace each x_i by sgn(x_i). If there are an even number of
minus signs we stop, but if there are an odd number we reverse the sign of an x_i of
smallest magnitude. ℰ_n is geometrically similar to a hemi-cube (hγ_n in the notation of
[30, p. 155]).
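In ±1 notation the algorithm of (2.4) reads as follows (a sketch of ours):

```python
def decode_even_weight(x):
    """Decode x in R^n to a nearest +1,-1 vector having an even number of
    -1s (the even weight code of (2.4) in +1,-1 notation)."""
    u = [1 if xi >= 0 else -1 for xi in x]
    if u.count(-1) % 2 == 1:
        # odd number of minus signs: reverse the sign at the coordinate of
        # smallest magnitude, the cheapest possible repair
        i = min(range(len(x)), key=lambda i: abs(x[i]))
        u[i] = -u[i]
    return u

print(decode_even_weight([0.8, -0.1, 0.5]))
```

The n sign operations plus the search for the smallest magnitude account for the roughly 2n steps quoted above.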
Example 1. The Golay code contains a [24,5] subcode with generator matrix

    1111 1111 0000 0000 0000 0000
    1111 0000 1111 0000 0000 0000
    1111 0000 0000 1111 0000 0000
    1111 0000 0000 0000 1111 0000
    1111 0000 0000 0000 0000 1111 ,
(7)
the sextet code (cf. [15]), which is geometrically similar to ℰ_6. Since this code plays a
key role in Section III, we give the precise decoding algorithm.
To decode x = (x_1 , . . . , x_24 ) we first compute

    s_I := x_1 + x_2 + x_3 + x_4 ,  s_II := x_5 + . . . + x_8 ,  . . . ,  s_VI := x_21 + . . . + x_24 ,
(8)
and

    u_I := sgn(s_I) , . . . , u_VI := sgn(s_VI) .
(9)
If an odd number of the u_N are negative we change the sign of a u_N for which |s_N| is
minimal. Then x is decoded as

    u := u_I u_I u_I u_I  u_II u_II u_II u_II  . . .  u_VI u_VI u_VI u_VI ,
(10)
(11a)
(11b)
Example 2.
Remark. Permutation codes [6], [7], [62] are a class of (in general) non-binary codes
that include first-order Reed-Muller codes, simplex codes, . . .

    ∪_{j=0}^{t-1} ( c^(j) + B )
(12)

    ∪_{j=0}^{t-1} c^(j) * B
(13)

(compare (1), (2) above). Finding the largest inner product of x with the vectors in
c^(j) * B is equivalent to finding the largest inner product of c^(j) * x with the vectors in B.
(14)
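The reason decompositions like (13) help is a simple identity: componentwise multiplication by a fixed ±1 vector c is an isometry, so all inner products against the coset c * B can be read off from the single transformed point c * x. A numerical check (arbitrary vectors of ours):

```python
import random

def star(u, v):
    # componentwise product u * v
    return [a * b for a, b in zip(u, v)]

def ip(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(7)
n = 6
c = [random.choice((1, -1)) for _ in range(n)]   # a +1,-1 coset representative
x = [random.uniform(-1, 1) for _ in range(n)]    # received point
u = [random.choice((1, -1)) for _ in range(n)]   # a vector of the subcode B

# x . (c * u) = (c * x) . u, since each c_i^2 = 1
assert ip(x, star(c, u)) == ip(star(c, x), u)
```

So one pass forming c^(j) * x per coset suffices, after which any fast decoder for B can be reused unchanged.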
an arbitrary point of R^n into a closest lattice point. (We can no longer work with inner
products.) The following lattices, and those geometrically similar to them, are most of
the lattices we know that have a fast decoding algorithm.
(2.9) The cubic lattice Z^n, consisting of all points (u_1 , . . . , u_n ) with integer
coordinates, can be decoded in about n steps. If x_i is a real number we define

    f(x_i) := nearest integer to x_i ,
(15)
(16)
Then the decoder for Z^n simply changes x to f(x) [22, Sect. III].
(2.10) The lattice D_n, consisting of all points in Z^n whose coordinates add to an
even number, can be decoded in about 4n steps. For x ∈ R^n, we define g(x) to be the
same as f(x), except that the worst component of x, that is, the one furthest from an
integer, is rounded the wrong way. In case of a tie, the component with the lowest
subscript is rounded the wrong way. (Formal definitions of f and g may be found on
page 228 of [22].)

The decoder for D_n computes f(x) and g(x), and the sum f(x_1) + . . . + f(x_n) of the
components of f(x). If the sum is even, the output is f(x), and if it is odd the output is
g(x). (See [22] for a proof and an example.)
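The decoders of (2.9) and (2.10) can be sketched together. The tie-breaking rule used in f below (round halves up) is an assumption of ours; the paper's exact rule is in [22]:

```python
import math

def f(x):
    """Round each coordinate to a nearest integer: the decoder for Z^n of
    (2.9). Halves are rounded up here (an assumed tie-break)."""
    return [math.floor(xi + 0.5) for xi in x]

def g(x):
    """As f, except that the coordinate furthest from an integer is rounded
    the wrong way (lowest subscript on a tie), as in (2.10)."""
    fx = f(x)
    e = [xi - fi for xi, fi in zip(x, fx)]           # rounding errors
    w = max(range(len(x)), key=lambda i: abs(e[i]))  # worst coordinate
    gx = list(fx)
    gx[w] += 1 if e[w] >= 0 else -1                  # round the wrong way
    return gx

def decode_Dn(x):
    """Decoder for D_n: take f(x) if its coordinate sum is even, else g(x)."""
    fx = f(x)
    return fx if sum(fx) % 2 == 0 else g(x)

print(decode_Dn([0.6, 0.6]), decode_Dn([0.6, 0.1]))
```

The parity repair works because changing the worst coordinate by one is the cheapest way to move f(x) onto the even-sum sublattice.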
The number of steps may be estimated as follows. For each of the n components of x
we compute (a) f(x_i) and (b) the error e_i = x_i - f(x_i), (c) test if |e_i| is a new worst
error [needed to determine g(x)], and (d) update
c + 2z , for c ∈ C , z ∈ Z^n ,
(17)
where we regard the 0s and 1s in c as real numbers rather than elements of the Galois
field GF(2). For our present purposes we wish to write the codewords in +1,-1
notation, in which case the points of Λ(C) consist of all vectors of the form

    c + 4z , for c ∈ C , z ∈ Z^n .
(18)
[The set of points (18) strictly speaking no longer forms a lattice, but rather is a translate
of a lattice by the vector (1 , 1 , . . . , 1).]
The following lemma makes it possible to use a decoding algorithm for C to decode
Λ(C).

Lemma. Suppose x = (x_1 , . . . , x_n ) lies in the cube -1 ≤ x_i ≤ 1 (i = 1 , . . . , n).
Then no point of Λ(C) is closer to x than the closest codeword of C.

Proof. Suppose the contrary, and let u = (u_1 , . . . , u_n ) be a closest lattice point to x.
By hypothesis some u_i's are neither +1 nor -1. By subtracting a suitable vector 4z, we
may change these coordinates to +1 or -1 (depending on their residues mod 4) to produce
a point of Λ(C) that is in C, and is at least as close to x as u is, a contradiction.
Decoding algorithm for Λ(C)

(i)

(ii) Let S denote the set of i for which 1 < x_i < 3. For i ∈ S, replace x_i by
2 - x_i.

(iii) . . . to x, obtaining an output c = (c_1 , . . . , c_n ), say.

(iv)
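The four steps can be sketched as follows. The bodies of steps (i) and (iv) are lost in this copy, so the reduction into the cube and its inverse below are our reading of the standard method of [22]; the direct search in step (iii) stands in for whatever decoder C supports:

```python
import math

def decode_lambda_C(x, codewords):
    """Hedged sketch of the decoder for Lambda(C) = {c + 4z : c in C, z in Z^n},
    with C given in +1,-1 notation. Steps (ii)-(iii) follow the text; (i) and
    (iv) are our reading of [22]: translate each coordinate into [-1,3),
    reflect (1,3) onto (-1,1), decode, then undo both moves."""
    y, shift, n = [], [], len(x)
    for xi in x:
        k = math.floor((xi + 1) / 4)         # (i) bring x_i into [-1, 3)
        shift.append(4 * k)
        y.append(xi - 4 * k)
    S = [i for i in range(n) if y[i] > 1]    # (ii) S = {i : 1 < x_i < 3}
    for i in S:
        y[i] = 2 - y[i]                      # reflect into the cube [-1, 1]
    # (iii) decode y in C; direct search stands in for any decoder for C
    c = list(max(codewords, key=lambda u: sum(a * b for a, b in zip(y, u))))
    for i in S:                              # (iv) undo the reflection ...
        c[i] = 2 - c[i]
    return [ci + si for ci, si in zip(c, shift)]   # ... and the translation

# toy C: the +1,-1 even weight code of length 3
C = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
print(decode_lambda_C([4.9, 1.1, 0.8], C))
```

The per-coordinate reflection t ↦ 2 - t preserves residues mod 4, hence maps Λ(C) onto itself, which is why the Lemma applies after folding.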
(19)
(20)
(21)
(22)
add 4z to c .
(23)
( Σ z_i ≡ 0 ( mod 2 ).) If such a modification were found, it would further speed up the

    c + 4z , c ∈ C , z ∈ Z^8 ,
(24)
c being a +1,-1 vector of length 8. [As mentioned above, with these coordinates E_8 is
not a lattice but a translate of a lattice by the vector (1 , 1 , . . . , 1).] C can be decoded in
about 3 · 8 + 8 = 32 steps by a fast Hadamard transform (see (2.2)). Therefore the
algorithm given in (2.12) will decode this version of E_8 in roughly 72 steps. This is
faster than the algorithm proposed in [22] (see (2.15) below), which requires about 104
steps.

If we require not only the closest point u ∈ E_8 to x, but also dist²(x,u), this can be
obtained at the end of step (iii) of the algorithm, using Eq. (3), at the cost of about 16
additional steps to compute x · x. If steps (i) and (ii) can be carried out in advance, as
will be the case when we use this algorithm in decoding the Leech lattice in Section IV,
the number of steps to decode one x drops to about 48, or 56 if dist²(x,u) is needed.
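A hedged sketch of this E_8 decoder, with a direct search over the 16 first-order Reed-Muller codewords standing in for the fast Hadamard transform (so the 72-step count does not apply to this version):

```python
import math

def hadamard_row(k, n=8):
    # row k of the Sylvester Hadamard matrix H_n
    return [(-1) ** bin(i & k).count("1") for i in range(n)]

# the [8,4] first-order Reed-Muller code in +1,-1 form:
# the 8 rows of H_8 and their negatives (16 codewords)
RM8 = [tuple(s * v for v in hadamard_row(k)) for k in range(8) for s in (1, -1)]

def decode_E8(x):
    """Sketch of the (2.13) decoder for this +1,-1 version of E_8, viewed
    as {c + 4z}: fold x into the cube [-1,1]^8, decode in the Reed-Muller
    code, then unfold."""
    y, shift = [], []
    for xi in x:
        k = math.floor((xi + 1) / 4)
        shift.append(4 * k)
        y.append(xi - 4 * k)
    S = [i for i in range(8) if y[i] > 1]
    for i in S:
        y[i] = 2 - y[i]
    c = list(max(RM8, key=lambda u: sum(a * b for a, b in zip(y, u))))
    for i in S:
        c[i] = 2 - c[i]
    return [ci + si for ci, si in zip(c, shift)]
```

Replacing the direct search by the transform of (2.2) gives the step counts quoted in the text.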
(2.14) Direct sums of lattices on this list can be handled in the same way as direct
sums of codes; see (2.6).

(2.15) Superlattices. If M is one of the above lattices and the lattice Λ contains M as a
sublattice of small index t, then Λ can also be decoded easily. We write

    Λ = ∪_{j=0}^{t-1} ( r^(j) + M ) ,
(25)
    y^(j) := x - r^(j) ,
(26)
    z^(j) := the closest point of M to y^(j) ,
(27)
    d_j := (z^(j) - y^(j)) · (z^(j) - y^(j)) ,
(28)
for j = 0 , . . . , t-1, and find a j, j* say, for which d_j is minimized. Then the decoder
output is

    u := z^(j*) + r^(j*) .
(29)
(30)
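Steps (26)-(29) translate into a short loop. The sublattice decoder and coset representatives below are toy stand-ins of ours (D_2 written as two cosets of 2Z^2), chosen only to exercise the scheme:

```python
def decode_superlattice(x, reps, decode_M):
    """Decode x in a lattice written as a union of cosets r^(j) + M, as in
    (25): decode x - r^(j) in M for each representative, keep the best.
    `decode_M` may be any exact decoder for the sublattice M."""
    best, best_d = None, None
    for r in reps:
        y = [xi - ri for xi, ri in zip(x, r)]              # (26)
        z = decode_M(y)                                    # (27)
        d = sum((zi - yi) ** 2 for zi, yi in zip(z, y))    # (28)
        if best_d is None or d < best_d:
            best_d = d
            best = [zi + ri for zi, ri in zip(z, r)]       # (29)
    return best

# toy use: D_2 is the union of the cosets (0,0) + 2Z^2 and (1,1) + 2Z^2
def decode_2Z2(y):
    return [2 * round(yi / 2) for yi in y]

print(decode_superlattice([0.6, 0.6], [(0, 0), (1, 1)], decode_2Z2))
```

The cost is t calls to the M-decoder plus t squared-distance computations, which is why a small index t matters.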
generator matrix.
    1100 1100 1100 1100 0000 0000
    1010 1010 1010 1010 0000 0000
    1010 1001 1100 0000 1100 0000
    1001 1100 1010 0000 1010 0000
    0111 1000 1000 1000 1000 1000
    0000 0000 1100 1100 1100 1100
    0000 0000 1010 1010 1010 1010
(31)
Main stage

Set record = 0 and j* = 0.

For j = 0 through 127,
  obtain j_I , . . . , j_VI from Table II,
  obtain G_I(j_I) , . . . , G_VI(j_VI) from the Gray code tables, and
  compute the inner product

    ip = G_I(j_I) + . . . + G_VI(j_VI)
(32a)
(32b)
  otherwise, and
  if ip > record , set record = ip and j* = j .
(33)
where

    u_N = sgn( G_N(j*_N) ) , N = I , . . . , VI ,

with the sign of the smallest |G_N(j*_N)| reversed if an even number of the u_N are
negative. record gives the inner product u · x.
0000
1100
1010
1001
1100
1010
1000
0000
0000
1111
1100
1010
1100
1010
1001
1000
1100
1010
0000
1100
1010
0000
0000
0000
1000
1100
1010
0000
0000
0000
1100
1010
1001
1000
1100
1010
0000
0000
0000
0000
0000
0000
1000
1100
1010
(34)
(4) and (34) together generate 𝒢_24. The precomputation stage computes three Gray code
tables, the first containing all 128 combinations

    ± x_1 ± . . . ± x_7 ± x_8 ,

the second all 64 combinations

    ± x_9 ± . . . ± x_15 + x_16 ,

and the third all 64 combinations

    ± x_17 ± . . . ± x_23 + x_24 ,

with an even number of minus signs in each case.
In the main stage j now runs from 0 to 511, and (32a) and (32b) are replaced by the
much simpler formula

    ip = G_I(j_I) + G_II(j_II) + G_III(j_III) ,
(35)
(36)
where u_N = sgn( G_N(j_N) ) , N = I , II , III (cf. (5)). The total number of steps is roughly
192 for the precomputation stage, plus 3 · 512 = 1536 for the main stage, a total of
1728.
(3.3) The Golay code of length 23 can be decoded by a straightforward modification
of either algorithm (or alternatively by using the algorithms as they stand and the method
of (2.8)).
4 · 24 · 8192 = 786432

steps to decode one point. In contrast the algorithm given below takes only about 55968
steps, which is about 14 times as fast. This algorithm is based on the Turyn
construction of Λ_24.
(4.1) The Turyn construction of the Leech lattice. R. J. Turyn showed around
1965 that the Golay code may be constructed by gluing together three copies of the [8,4]
first-order Reed-Muller code (see [50, Chap. 18]). The Leech lattice may be constructed
in a similar manner by gluing together three copies of the E_8 lattice. Although this
construction has been known for many years, the following particularly simple version
has not appeared in print before. We give two sets of coordinates, the first being more
elegant, while the second is easier to decode.

Let Λ_8 denote the particular version of E_8 obtained by multiplying the vectors in
(30) by 4. Typical vectors in Λ_8 are (0^8), (±4 , ±4 , 0^6), and (±2^8) with an even
number of minus signs.
Definition 1. The Leech lattice Λ_24 consists of the vectors

    (e_1 + a + t, e_2 + b + t, e_3 + c + t) ,
(37)
where

e_1 , e_2 , e_3 are arbitrary vectors of Λ_8 ,
a, b are arbitrary vectors from the list of 16 given in Table IVa,
c is the unique vector in Table IVa satisfying

    a + b + c ≡ 0 ( mod Λ_8 ) ,
(38)
To see that (37) does define the Leech lattice, we begin with the standard Miracle Octad
Generator (or MOG) construction of Λ_24 (see [16] or [25] for example), in which the 24
coordinates are divided into 3 sets of 8. The intersection of Λ_24 with any one of these
8-dimensional spaces is our Λ_8, and the projection onto the same space is (1/2)Λ_8. The
quotient (1/2)Λ_8 / Λ_8 is an abelian group of order 256, and the vectors a + t, a ∈ Table IVa,
t ∈ Table IVt, are coset representatives for Λ_8 in (1/2)Λ_8. (The blocks of four coordinates in
Tables IVa and IVt represent columns of the MOG. See also Fig. 27 of [16].) The
quotient Λ_24 / (Λ_8 ⊕ Λ_8 ⊕ Λ_8) is an abelian group of order 4096, and the vectors

    (a + t, b + t, c + t) ,
(39)
(1/2) ·

    1 1 0 1 1 0 0 0
    1 0 0 1 0 0 0 0
    0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0
    1 1 0 0 1 1 0 0
    0 0 1 1 0 0 1 1
(40)
0000 )
2200 )
0022 )
2020 )
0202 )
2002 )
0220 )
(41)
Also let
(42)
(43)
where

e_1 , e_2 , e_3 are arbitrary vectors of Λ_8 ,
a, b are arbitrary vectors from Table Va,
c is the unique vector in Table Va satisfying

    a + b + c ≡ 0 ( mod Λ_8 ) ,
(44)
(45)
with a, b, c ∈ Table Va, t ∈ Table Vt, and satisfying (44), we write down a triple

    j = ( j_1 , j_2 , j_3 ) ,  0 ≤ j ≤ 4095 ,

indicating that a + t is entry j_1 of Table VI, b + t is entry j_2 , and c + t is entry j_3 . In
other words

    ( ( j_1 ) , ( j_2 ) , ( j_3 ) )

is a triple (45) that satisfies (44). Part of the cross-reference table based on Table VI is
shown in Table VII.
Precomputation stage

In a moment we shall apply the E_8 decoder of (2.13) to the 256 vectors

    (x_1 , . . . , x_8 ) + a + t ,  a ∈ Table Va , t ∈ Table Vt .

Before doing this we carry out steps (19) to (21) of the decoder in advance. The
components of the vectors a + t range from -2 to +4. So our first step is to
compute the 7 · 24 = 168 numbers

    y_im = x_i + m  ( -2 ≤ m ≤ 4 ) ,
(46)
    (x_9 , . . . , x_16 ) + ( j ) ,
(47)
    (x_17 , . . . , x_24 ) + ( j ) ,
(48)
and apply the E_8 decoder of (2.13) to these three vectors (making use of the fact that we
have already carried out steps (19)-(21) of that algorithm). Let the closest points of Λ_8
to (46)-(48) be

    p( j, 1 ) , p( j, 2 ) , p( j, 3 ) ,

respectively, and let
(49)
Main stage

Set record = 0 and j* = 0.

For j = 0 through 4095,
  obtain j_1 , j_2 , j_3 from Table VII, and calculate the squared distance

    d = d( j_1 , 1 ) + d( j_2 , 2 ) + d( j_3 , 3 )
(50)
Acknowledgements
We thank A. R. Calderbank, A. M. Odlyzko, M. R. Schroeder and A. D. Wyner for
some helpful discussions.
List of Footnotes
(1) Page 4. As the automobile advertisements say, use these figures for comparison only.
The actual running time will depend on the relative speeds of addition and multiplication,
etc., and will probably be greater than the figures given here. We have tried, however, to
evaluate all the algorithms in a uniform manner.
Figure Captions
Figure 1. (a) First-order Reed-Muller code of length 4. (b) An orthogonal matrix.
Figure 2. A code formed by gluing subcodes B_1 , B_2 , B_3 together.
TABLE I
________________________________________________
        I    II   III   IV    V    VI
 (2)
 (4)
________________________________________________
TABLE II
_________________________________
  j    I   II  III   IV    V   VI
_________________________________
  0    0    0    0    0    0    0
  1    2    2    2    2    0    0
  2    6    6    6    6    0    0
  3    4    4    4    4    0    0
  4    6   14    2    0    2    0
 ...
_________________________________
TABLE III
___________________________
 0    x_1 + x_2 + x_3 + x_4
 1   -x_1 + x_2 + x_3 + x_4
 2   -x_1 - x_2 + x_3 + x_4
 3    x_1 - x_2 + x_3 + x_4
 4    x_1 - x_2 - x_3 + x_4
 5   -x_1 - x_2 - x_3 + x_4
 6   -x_1 + x_2 - x_3 + x_4
 7    x_1 + x_2 - x_3 + x_4
 8    x_1 + x_2 - x_3 - x_4
 9   -x_1 + x_2 - x_3 - x_4
10   -x_1 - x_2 - x_3 - x_4
11    x_1 - x_2 - x_3 - x_4
12    x_1 - x_2 + x_3 - x_4
13   -x_1 - x_2 + x_3 - x_4
14   -x_1 + x_2 + x_3 - x_4
15    x_1 + x_2 + x_3 - x_4
___________________________
TABLE IV
_______________________
     (a)          (t)
_______________________
2020 0202    3111 1111
2020 0202    3111 1111
2002 2002    3111 1111
2002 2002    3111 1111
2002 0220    3111 1111
 ...
_______________________
TABLE V
_______________________
     (a)          (t)
_______________________
2020 0000    1111 2000
2002 0000    2000 1111
2000 0200    2011 0011
2000 0002    2011 1100
1111 1111    1210 1010
1111 1111    2101 0101
1111 1111    1010 2101
1111 1111    2101 1010
1111 1111    1021 1001
1111 1111    2110 0110
1111 1111    1001 2110
1111 1111    2110 1001
_______________________
TABLE VI
___________________
 ( 0 )   0000 0000
 ( 1 )   2200 0000
 ( 2 )   2020 0000
 ...
 ( 8 )   1111 1111
 ...
 ( 16 )  1111 1111
 ( 17 )  3311 1111
 ...
___________________
TABLE VII
__________________
 0 = ( 0 , 0 , 0 )
 1 = ( 1 , 1 , 0 )
 2 = ( 1 , 0 , 1 )
 3 = ( 0 , 1 , 1 )
 4 = ( 16 , 16 , 16 )
 5 = ( 17 , 17 , 16 )
 ...
__________________
References
9.
10.
I. F. Blake, The Leech lattice as a code for the Gaussian channel, Information and
Control, 19 (1971), 66-74.
11.
12.
13.
R. de Buda, The upper error bound of a new near-optimal code, IEEE Trans.
Information Theory, IT-21 (1975), 441-445.
14. D. Chase, A class of algorithms for decoding block codes with channel
measurement information, IEEE Trans. Information Theory, IT-18 (1972), 170-182.
15.
16.
J. H. Conway, The Golay codes and the Mathieu groups, Chapter 12 of [29].
17.
18.
19.
20.
21.
22.
23.
24.
J. H. Conway and N. J. A. Sloane, Lorentzian forms for the Leech lattice, Bull.
Amer. Math. Soc., 6 (1982), 215-217.
25.
26.
27.
J. H. Conway and N. J. A. Sloane, A fast encoding method for lattice codes and
quantizers, IEEE Trans. Information Theory, IT-29 (1983), 820-824.
28.
29.
J. H. Conway and N. J. A. Sloane, The Leech lattice, Sphere Packings, and Related
Topics, Springer-Verlag, N.Y., in preparation.
30.
31.
32.
B. G. Dorsch, A decoding algorithm for binary block codes and j-ary output
channels, IEEE Trans. Information Theory, IT-20 (1974), 391-394.
33. G. S. Evseev, Complexity of decoding for linear codes (in Russian), Problemy
Peredachi Informatsii, 19 (No. 1, 1983), 3-8. English translation in Problems of
Information Transmission, 19 (No. 1, 1983), 1-6.
34. G. D. Forney, Jr., Generalized minimum distance decoding, IEEE Trans.
Information Theory, IT-12 (1966), 125-131.
35. G. D. Forney, Jr., The Viterbi algorithm, Proc. IEEE, 61 (1973), 268-278.
36. A. Gersho, Asymptotically optimal block quantization, IEEE Trans. Information
Theory, IT-25 (1979), 373-380.
37. A. Gersho, On the structure of vector quantizers, IEEE Trans. Information Theory,
IT-28 (1982), 157-166.
38. E. N. Gilbert, Gray codes and paths on the n-cube, Bell Syst. Tech. J., 37 (1958),
815-826.
39. D. M. Gordon, Minimal permutation sets for decoding the binary Golay code,
IEEE Trans. Information Theory, IT-28 (1982), 541-543.
40.
41.
42. C. M. Hackett, An efficient algorithm for soft decision decoding of the (24,12)
extended Golay code, IEEE Trans. Comm., COM-29 (1981), 909-911 and COM-30 (1982), 554.
43. C. R. P. Hartmann and L. D. Rudolph, An optimum symbol-by-symbol decoding
rule for linear codes, IEEE Trans. Information Theory, IT-22 (1976), 514-517.
44. T. Y. Hwang, Decoding linear block codes for minimizing word error rate, IEEE
Trans. Information Theory, IT-25 (1979), 733-737.
45. T. Y. Hwang, Efficient optimal decoding of linear block codes, IEEE Trans.
Information Theory, IT-26 (1980), 603-606.
46. D. E. Knuth, The Art of Computer Programming, Addison-Wesley, Reading,
Mass., Vol. 3, 1973, p. 216.
47.
48.
49.
50.
51.
52.
53.
54.
55.
M. Phister, Jr., Logical Design of Digital Computers, Wiley, N.Y., 1960, pp. 232-234, 399-401.
56. V. Pless, The children of the (32,16) doubly even codes, IEEE Trans. Information
Theory, IT-24 (1978), 738-746.
57. V. Pless and N. J. A. Sloane, On the classification and enumeration of self-dual
codes, J. Combinatorial Theory, 18 (1975), 313-335.
70.
J. Wolfmann, Nouvelles méthodes de décodage du code de Golay (24, 12, 8), Rev.
CETHEDEC Cahier, No. 2, 1981, pp. 79-88.
71.
J. Wolfmann, A permutation decoding of the (24, 12, 8) Golay code, IEEE Trans.
Information Theory, IT-29 (1983), 748-750.