Journal of Theoretical Probability, Vol. 12, No. 1, 1999
Almost-Sure Results for a Class of Dependent Random Variables1

Magda Peligrad2,4 and Allan Gut3
Received January 30, 1997; revised August 7, 1998
The aim of this note is to establish almost-sure Marcinkiewicz–Zygmund type results for a class of random variables indexed by $\mathbf{Z}^d_+$, the positive $d$-dimensional lattice points, and having maximal coefficient of correlation strictly smaller than 1. The class of applications includes filters of certain Gaussian sequences and Markov processes.

KEY WORDS: Random field; moment inequality; strong law; identically distributed random variables; maximal coefficient of correlation.
1. INTRODUCTION
Let $\mathbf{Z}^d_+$ ($d \ge 1$) denote the positive integer $d$-dimensional lattice with coordinate-wise partial ordering $\le$. The notation $\mathbf{m} \le \mathbf{n}$, where $\mathbf{m} = (m_1, m_2,\ldots, m_d)$ and $\mathbf{n} = (n_1, n_2,\ldots, n_d)$, thus means that $m_k \le n_k$ for $k = 1, 2,\ldots, d$. We also use $|\mathbf{n}|$ for $\prod_{k=1}^{d} n_k$, $\mathbf{n} \to \infty$ is to be interpreted as $n_k \to \infty$ for $k = 1, 2,\ldots, d$, and $\mathbf{1} = (1, 1,\ldots, 1)$. For $d = 1$ we use the notation $\mathbf{Z}_+$ instead of $\mathbf{Z}^1_+$. In this paper we investigate a class of dependent random variables based on an interlaced condition which uses the maximal coefficient of correlation. The condition is defined for random fields in the following way:
1 Partially supported by a Taft Research Grant and an NSF grant.
2 Department of Mathematical Sciences, University of Cincinnati, Cincinnati, Ohio 45221-0025. E-mail: [email protected].
3 Department of Mathematics, Uppsala University, Box 480, S-751 06 Uppsala, Sweden. E-mail: [email protected].
4 To whom correspondence should be addressed.
Let $\{X_k, k \in \mathbf{Z}^d_+\}$ be a random field, let $S \subset \mathbf{Z}^d_+$, and define $\mathcal{F}_S = \sigma(X_k, k \in S)$ and

$$\rho^*_n = \sup |\mathrm{corr}(f, g)|,$$

where the supremum is taken over all pairs $(S, T)$ with $\mathrm{dist}(S, T) \ge n$, and all $f \in L^2(\mathcal{F}_S)$, $g \in L^2(\mathcal{F}_T)$, and where $\mathrm{dist}(S, T)^2 = \inf_{x\in S,\, y\in T} \|x - y\|^2 = \inf_{x\in S,\, y\in T} \sum_{k=1}^{d} (x_k - y_k)^2$, i.e., the distance is Euclidean. Various limit
properties under the condition $\rho^*_n \to 0$ were studied by Bradley(4,5) and Miller.(13) Bryc and Smolenski(8) and Peligrad(19,20) pointed out the importance of the condition

$$\lim_{n\to\infty} \rho^*_n < 1 \qquad (1.1)$$

in estimating the moments of partial sums or of maxima of partial sums. Let us also note that, since $0 \le \cdots \le \rho^*_n \le \rho^*_{n-1} \le \cdots \le \rho^*_1 \le 1$, (1.1) is equivalent to

$$\rho^*_N < 1 \qquad \text{for some } N \ge 1 \qquad (1.2)$$
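Since (1.1) and (1.2) are invoked throughout the paper, it may be worth spelling out the one-line argument behind the stated equivalence; the following display is our sketch of this standard step, not a quotation from the original:

$$\rho^*_n \text{ non-increasing} \implies \lim_{n\to\infty}\rho^*_n = \inf_{n\ge 1}\rho^*_n, \quad\text{so}\quad \lim_{n\to\infty}\rho^*_n < 1 \iff \rho^*_N < 1 \text{ for some } N \ge 1.$$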
It should be said that, according to the proof of Theorem 2 in Bradley,(4) if $\{X_n\}_{n\in\mathbf{Z}_+}$ is a strictly stationary Gaussian sequence which has a bounded spectral density $f(t)$, i.e., $0 < m \le f(t) \le M$ for every $t$, then the sequence obtained by applying a measurable function $g: \mathbf{R}^u \to \mathbf{R}$, for some $u \ge 1$, to the Gaussian sequence satisfies (1.1). Other classes of processes satisfying (1.1) are pointed out in Bradley(7) and Bryc and Smolenski.(8)
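The displays belonging to the preceding sentence define the filtered sequence and record the property it inherits. A natural reading, offered here as an assumption rather than as Bradley's exact formulas, is that the filter acts on $u$ consecutive coordinates,

$$Y_j = g(X_j, X_{j+1},\ldots, X_{j+u-1}), \qquad j \in \mathbf{Z}_+,$$

and that the resulting sequence $\{Y_j\}$ satisfies (1.1), since variables $Y_j$ and $Y_k$ with $|j - k| \ge u$ are functions of disjoint blocks of the underlying Gaussian sequence.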
In this note we prove that, without any additional assumption, sequences of identically distributed random variables satisfying (1.1) (or, equivalently, (1.2)) also satisfy the strong law of large numbers. We also establish the rate of convergence in the Marcinkiewicz–Zygmund strong law in the form given by Baum and Katz.(1) As an intermediary result we estimate higher moments of partial sums and of maxima of partial sums.
The paper is organized in the following way: Section 1 contains the estimation of moments for the case $d = 1$. In Section 2 we apply the results
of Section 1 to obtain laws of large numbers for that case. Section 3 contains the extensions to random fields.
Throughout the paper we use the following notation: $[x]$ denotes the integer part of $x$; $\ll$ is the Vinogradov symbol and replaces the $O$-notation; $\log_2$ means the logarithm to base two; and $\log^+ x = \max\{1, \log x\}$.
1.1. Estimation of Moments
Let $\{X_i\}_{i\ge 1}$ be a sequence of random variables and set $S_n = \sum_{k=1}^{n} X_k$, $n \ge 1$. In this section we establish the following theorem:
Theorem 1. Suppose that $EX_i = 0$ and that $E|X_i|^q < \infty$ for every $i \ge 1$ and for a certain $q \ge 2$. Finally, assume that $\lim_{n\to\infty} \rho^*_n < 1$. Then there exists a constant $D(q, N, \rho^*_N)$, depending on $q$, $N$, and $\rho^*_N$, with $N$ and $\rho^*_N$ defined via (1.2), such that

$$E|S_n|^q \le D(q, N, \rho^*_N)\Big(\sum_{i=1}^{n} E|X_i|^q + \Big(\sum_{i=1}^{n} EX_i^2\Big)^{q/2}\Big) \qquad (1.3)$$
If, in addition, $\rho^*_1 < 1$, then, for a certain constant $D'(q, \rho^*_1)$, we have
The first inequality thus generalizes the upper inequality of Rosenthal,(21) where the independent case is treated.
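For comparison, the classical upper Rosenthal inequality for independent, mean-zero random variables with finite $q$-th moments ($q \ge 2$) reads as follows; this is a standard fact, quoted for the reader's convenience:

$$E|S_n|^q \le c(q)\Big(\sum_{i=1}^{n} E|X_i|^q + \Big(\sum_{i=1}^{n} EX_i^2\Big)^{q/2}\Big).$$

Relation (1.3) is exactly this bound, with the constant now depending on the dependence structure through $N$ and $\rho^*_N$.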
The proof of the theorem requires several lemmas, the first of which is
due to Bryc and Smolenski;(8) see their Lemmas 2 and 3.
Lemma 1. Assume that $\lim_{n\to\infty} \rho^*_n < 1$, that $EX_i = 0$ and $E|X_i|^q < \infty$ for every $i \ge 1$ and for some $q \ge 2$. Then there is a constant $C(q, N, \rho^*_N)$, with $N$ and $\rho^*_N$ defined via (1.2), such that

$$E|S_n|^q \le C(q, N, \rho^*_N)\, E\Big(\sum_{i=1}^{n} X_i^2\Big)^{q/2} \qquad (1.5)$$

Moreover, for $2 \le q \le 4$, the relation (1.3) holds.
Before proceeding we remark that (1.5) is an upper Marcinkiewicz–Zygmund inequality (originally stated for sums of independent mean-zero random variables) or an upper Burkholder inequality (originally stated for
martingales); cf. Marcinkiewicz and Zygmund(14) and Burkholder,(9) respectively. Further, by identifying $Q_n(X) = (\sum_{i=1}^{n} X_i^2)^{1/2}$ as the quadratic variation, we may rephrase the inequality as $\|S_n\|_q \le C \|Q_n(X)\|_q$.
The next result is Proposition 3.1 of Peligrad.(20)
Lemma 2. Assume that $\rho^*_1 < 1$, and that $EX_i = 0$ and $E|X_i|^q < \infty$ for every $i \ge 1$ and some $q \ge 2$. Then there exists a constant $K(q, \rho^*_1)$ such that

Moreover, for $2 \le q \le 4$, there is $K'(q, \rho^*_1)$ such that, for every $n \ge 1$,
We next establish:
Lemma 3. Suppose that $EX_i = 0$, that $E|X_i|^q < \infty$ for every $i \ge 1$ and some $q \ge 2$, and let $2 \le k \le q$. Then
Proof. By the Cauchy–Schwarz inequality and Chebyshev's inequality we deduce from (1.8)

By summing these relations from 1 to $n$ and applying the Cauchy–Schwarz inequality we get

and the result follows by elementary considerations. □
Proof of Theorem 1. We begin by proving (1.3) for the case $N = 1$. It is no restriction to assume that the constant $D(q, \rho^*_1) \ge 1$. As an induction hypothesis we suppose that, for every $p < q/2$, we have:

By Lemma 3 in Bryc and Smolenski(8) this inequality is true for $2 \le p \le 4$. Let us prove it for every $p \ge 2$. Toward this end, let $q > 4$. By (1.5),

whence, by the induction hypothesis with $V_n = (\sum_{i=1}^{n} EX_i^2)^{1/2}$,

By simple computations we now obtain the estimate

By Lemma 3 applied with $k = 4$ this relation gives

which establishes the relation (1.3) for the case $\rho^*_1 < 1$.
As for the general case, we first note that when $N = 1$, (1.1) reduces to the condition $\rho^*_1 < 1$, that is, we are done. Therefore, suppose that $N \ge 2$. The idea is to split $S_n$ into a sum of subsequences, so that, in each subsequence, the distance between the summands equals $N$, noting that the coefficient $\rho^*_1$ defined for any such subsequence is dominated by the original $\rho^*_N < 1$. The decomposition we shall exploit is
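The display giving the decomposition can be realized as follows, with $m = [n/N]$; this particular indexing is our sketch of one natural choice and need not coincide with the authors' exact formula:

$$S_n = \sum_{r=1}^{N} \sum_{j=0}^{m-1} X_{r+jN} + \sum_{k=mN+1}^{n} X_k.$$

Each inner sum runs along an arithmetic progression with step $N$, so that within each block the distance between the indices of consecutive summands equals $N$, while the last term collects at most $N$ boundary summands.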
Now, if $n < N$ only the last term comes into play and there is nothing more to prove. Thus, suppose that $n \ge N$. By standard inequalities of triangle and Minkowski type, we obtain

We also note that the last expectation is dominated by the sum of the moments of order $q$. Finally, since we already know that the lemma holds for every subsequence, and since $a^r + b^r \le (a + b)^r$ for positive reals $a$, $b$ and $r \ge 1$, the sums of the corresponding right-hand sides are dominated by the right-hand side of (1.3), which concludes the proof of (1.3) (note that $q/2 \ge 1$).
In order to prove (1.4) we just notice that, by (1.6), tacitly assuming, without loss of generality, that the constants in (1.3) and (1.4) are the same,

and we have only to apply (1.3) to $E|\sum_{j=1}^{n} (X_j^2 - EX_j^2)|^{q/2}$ (followed by an application of Lemma 3) in order to obtain (1.4). □
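The application of (1.3) to the centered squares rests on the elementary $c_r$-inequality; for completeness (a standard fact, not part of the original text):

$$E\Big(\sum_{i=1}^{n} X_i^2\Big)^{q/2} \le 2^{q/2-1}\Big(E\Big|\sum_{i=1}^{n}\big(X_i^2 - EX_i^2\big)\Big|^{q/2} + \Big(\sum_{i=1}^{n} EX_i^2\Big)^{q/2}\Big),$$

after which (1.3), applied with exponent $q/2$ to the mean-zero variables $X_i^2 - EX_i^2$, and Lemma 3 can be used to control the first term on the right.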
Remark 1. The proof of Theorem 1 also gives information on the size of the constant $D(q, N, \rho^*_N)$ as a function of $q$. We point out the recurrence formula
where $C(q, N, \rho^*_N)$ is defined in (1.5) and $q > 2$. The constants $C(q, N, \rho^*_N)$ can be traced from the proof of Lemma 2 in Bryc and Smolenski.(8) This observation is useful when one uses Taylor expansion to estimate exponential moments of $S_n$ in terms of exponential moments of the individual summands.
A second look at (1.4) shows that an estimate of $E\max_{1\le i\le n} S_i^2$ is required there. In the strictly stationary case, $E\max_{1\le i\le n} S_i^2 \le Kn$, where $K$ is a constant that may depend on the sequence; see Peligrad,(20) [Prop. 3.3]. For the purpose of the present paper it will be enough to use the following useful consequence of Theorem 1, which does not require stationarity. Note also that here we only assume that (1.1) holds, whereas for (1.4) we assumed that $\rho^*_1 < 1$.
Corollary 1. Suppose that $EX_i = 0$ and that $E|X_i|^q < \infty$ for every $i \ge 1$ and some $q \ge 2$. If $\lim_{n\to\infty} \rho^*_n < 1$ we can find a constant $K(q, N, \rho^*_N)$, depending on $q$, $N$, and $\rho^*_N$, with $N$ and $\rho^*_N$ defined via (1.2), such that
Proof. Again, suppose first that $\rho^*_1 < 1$, i.e., that $N = 1$. By (1.4) we only have to estimate $E\max_{1\le i\le n} S_i^2$. However, it follows, by (1.3) (or (1.5)) with $q = 2$, that $E(\sum_{i=l}^{m} X_i)^2 \le C\sum_{i=l}^{m} EX_i^2$ for every $1 \le l \le m \le n$, where $C$ depends only on the $\rho^*_n$'s. By Billingsley,(2) [p. 102], we get, for some constant $C_1$, that

and the result follows by (1.4).
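Billingsley's maximal inequality, combined with the $q = 2$ bound just obtained, yields an estimate of Rademacher–Menshov type; in our formulation (the exact constant and form in the original may differ):

$$E\max_{1\le i\le n} S_i^2 \le C_1 (\log_2 2n)^2 \sum_{i=1}^{n} EX_i^2.$$

This is also where the notation $\log_2$, fixed in the Introduction, enters.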
For the general case, we use the decomposition

and suppose, as before, that $n \ge N$ (since there is nothing more to prove otherwise). Squaring the equation, using triangle-type inequalities, and taking expectations yields

which, together with (1.11) and the usual arguments, shows that

and the conclusion follows. □
2. ALMOST-SURE RESULTS
We first establish the rate of convergence for the Marcinkiewicz–Zygmund strong law under condition (1.1).
Theorem 2. Let $\{X_i\}_{i\ge 1}$ be identically distributed random variables, let $\alpha p > 1$ and $\alpha > 1/2$, and suppose that $EX_1 = 0$ when $\alpha \le 1$. Assume that $\lim_{n\to\infty} \rho^*_n < 1$. The following statements are equivalent:

(i) $E|X_1|^p < \infty$;

(ii) $\sum_{n=1}^{\infty} n^{p\alpha-2} P(\max_{1\le j\le n} |S_j| > \varepsilon n^{\alpha}) < \infty$ for all $\varepsilon > 0$.
Proof. The proof follows the classical lines; cf. Baum and Katz.(1) We first prove that (i) implies (ii). It is easy to see that it is no loss of generality to assume that $p > 1$ and that the variables are centered at expectations when $\alpha > 1/2$; see Peligrad,(17) [pp. 310 and 312].
Truncate at the level $n^{\alpha}$, and put $X'_{n,i} = X_i I(|X_i| \le n^{\alpha}) - EX_i I(|X_i| \le n^{\alpha})$ for $1 \le i \le n$, and set $S'_{n,j} = \sum_{i=1}^{j} X'_{n,i}$ for $n \ge 1$. Noting that $EX_1 I\{|X_1| \le n^{\alpha}\} = -EX_1 I\{|X_1| > n^{\alpha}\}$, in view of the fact that $EX_1 = 0$, we have
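The display introduced here is the usual Baum–Katz truncation step; in the present notation it should take roughly the following form (a sketch; the exact handling of the centering term may differ in the original):

$$\sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\max_{1\le j\le n}|S_j| > \varepsilon n^{\alpha}\Big) \le \sum_{n=1}^{\infty} n^{p\alpha-2}\, n P(|X_1| > n^{\alpha}) + \sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\max_{1\le j\le n}|S'_{n,j}| > \varepsilon n^{\alpha} - nE|X_1| I\{|X_1| > n^{\alpha}\}\Big).$$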
Next we observe that the first sum in the right-hand side is finite, since $E|X_1|^p < \infty$, and, furthermore, since $\alpha p > 1$, that $nE|X_1| I\{|X_1| > n^{\alpha}\} = o(n^{\alpha})$ as $n \to \infty$. It therefore remains to show that
Next we use Chebyshev's inequality for a suitably large $k$ which will be determined later. We have

By Corollary 1 we get
In order to estimate $I$ we set $b_k = P(k \le |X_1| < k+1)$ and notice that $E|X_1|^p < \infty$ iff $\sum_k k^p b_k < \infty$. By selecting $k > p$ and changing the order of summation we now get:
As for $II$ we distinguish two cases.

1. $p \ge 2$, in which case

which is convergent if $k$ is selected such that $k > (p\alpha - 1)/(\alpha - 1/2)$; a sketch of this computation is given after the list.

2. $1 < p < 2$, in which case

which is convergent for $k > 2$, provided $\alpha p > 1$.
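To illustrate case 1 (as announced above): for $p \ge 2$ we have $E(X'_{n,i})^2 \le EX_1^2 =: \sigma^2 < \infty$, so the second term can be handled as follows; the display is our sketch, with any logarithmic factors from Corollary 1 absorbed by the strict inequality on $k$:

$$II \ll \sum_{n=1}^{\infty} n^{p\alpha-2}\, n^{-k\alpha} (n\sigma^2)^{k/2} = \sigma^{k} \sum_{n=1}^{\infty} n^{p\alpha-2-k(\alpha-1/2)} < \infty \quad\text{for } k > \frac{p\alpha-1}{\alpha-1/2}.$$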
Now we prove that (ii) implies (i). Obviously (ii) implies that

and that

Since $P(\max_{1\le j\le n} |X_j| > n^{\alpha}) = \sum_{j=1}^{n} P(|X_j| > n^{\alpha},\, \max_{1\le i\le j-1} |X_i| \le n^{\alpha})$, we deduce, in view of the equidistribution, that
By centering we get
In order to estimate $I$ we apply the Cauchy–Schwarz inequality and relation (1.3) with $q = 2$ and obtain

Inserting the estimate from relation (2.6) into relation (2.5), and then into relation (2.4), we get:
whence, by (2.3), we deduce that, for every n sufficiently large, we have
Relation (2.2) finally gives
and (i) follows. □
Another result we would like to mention is the classical law of large numbers, which holds under (1.1):
Theorem 3. Let $\{X_i\}_{i\ge 1}$ be a sequence of identically distributed random variables with $\lim_{n\to\infty} \rho^*_n < 1$. Then $E|X_1| < \infty$ implies that $S_n/n \to EX_1$ almost surely as $n \to \infty$.
Proof. The proof is identical to Etemadi's proof of the strong law of large numbers for sequences of identically distributed and pairwise independent random variables; see Etemadi,(10) [Thm. 1], or Billingsley,(3) [Thm. 22.1]. The only difference is the use of (1.3) (or (1.5)) with $q = 2$ instead of the pairwise independence. □
3. RANDOM FIELDS
In this section we wish to extend the results to random fields. Recall from the Introduction that the random variables under consideration are indexed by $\mathbf{Z}^d_+$, the positive integer $d$-dimensional lattice with coordinate-wise partial ordering $\le$. Toward this end, a first important observation is that inequalities which do not depend on the (partial) order of the index set, such as the triangle inequality, moment inequalities for sums, and so on, remain valid "automatically." Namely, such relations only depend on the fact that, if $\{X_k, k \in \mathbf{Z}^d_+\}$ are random variables and $\{S_n, n \in \mathbf{Z}^d_+\}$ their partial sums, i.e., $S_n = \sum_{k\le n} X_k$ for $n \in \mathbf{Z}^d_+$, then $S_n$ is simply a sum of $|\mathbf{n}|$ random variables. However, for inequalities involving, for example, $\max_{k\le n} S_k$, the (partial) order of the index set is important. With this in mind (cf. also Gut,(12) [p. 471], and Bryc and Smolenski,(8) [Remark 1, p. 630]), let us now examine the results from earlier sections.
Throughout this section we thus consider the random field $\{X_k, k \in \mathbf{Z}^d_+\}$, and set $S_n = \sum_{k\le n} X_k$ for $n \in \mathbf{Z}^d_+$.
3.1. Estimation of Moments
In this subsection we extend Theorem 1 to random fields.
Theorem 4. Suppose that $EX_k = 0$ and that $E|X_k|^q < \infty$ for every $k \in \mathbf{Z}^d_+$ and for a certain $q \ge 2$. Finally, assume that $\lim_{n\to\infty} \rho^*_n < 1$. Then

$$E|S_n|^q \le D(q, N, \rho^*_N, d)\Big(\sum_{k\le n} E|X_k|^q + \Big(\sum_{k\le n} EX_k^2\Big)^{q/2}\Big) \qquad (3.1)$$

where $D(q, N, \rho^*_N, d)$ is a constant depending on $q$, $N$, $\rho^*_N$, and $d$, with $N$ and $\rho^*_N$ defined via (1.2).

If, in addition, $\rho^*_1 < 1$, then, for a certain constant $D'(q, \rho^*_1, d)$, we have
The proof of the theorem is based on the relevant extensions of Lemmas 1-3 above. For the reader's convenience we formulate the extensions and point out how the earlier proofs have to be modified (if at all).
Lemma 4. Assume that $\lim_{n\to\infty} \rho^*_n < 1$, that $EX_k = 0$ and $E|X_k|^q < \infty$ for every $k \in \mathbf{Z}^d_+$ and for some $q \ge 2$. Then there is a constant $C(q, N, \rho^*_N, d)$, depending on $q$, $N$, $\rho^*_N$, and $d$, with $N$ and $\rho^*_N$ defined via (1.2), such that

$$E|S_n|^q \le C(q, N, \rho^*_N, d)\, E\Big(\sum_{k\le n} X_k^2\Big)^{q/2}$$

Moreover, for $2 \le q \le 4$, the relation (3.1) holds.
Lemma 5. Assume that $\rho^*_1 < 1$, and that $EX_k = 0$ and $E|X_k|^q < \infty$ for every $k \in \mathbf{Z}^d_+$ and some $q \ge 2$. Then there exists a constant $K(q, \rho^*_1, d)$ such that

Moreover, for $2 \le q \le 4$, there is $K'(q, \rho^*_1, d)$ such that
Lemma 6. Suppose that $EX_j = 0$, that $E|X_j|^q < \infty$ for every $j \in \mathbf{Z}^d_+$ and some $q \ge 2$, and let $2 \le k \le q$. Then
Lemma 4 is contained in Bryc and Smolenski;(8) see their Remark 1. As for Lemma 5, the proof is similar to the proof of Proposition 3.1 of Peligrad.(20) The first modification is that we define the subset of points $Q$ of our index set as $Q \subset \{k : k \le n\}$, with $Q^*$ being the complement (thus corresponding to $Q \subset \{1, 2,\ldots, n\}$ and $Q^* = \{1, 2,\ldots, n\} \setminus Q$ in Peligrad(20)). Once this has been done the proof follows, line by line, the same path (except for the obvious modification that the number of possible subsets $Q$ now is $2^{|\mathbf{n}|}$, compared to $2^n$ in Peligrad(20)) until we reach the estimation of $E\max_{k\le n} |\sum_{j\le k} \varepsilon_j X_j|^2$. After conditioning on $\{X_j\}$, one now uses the Lévy inequality for random fields derived in Gabriel(11) [see also Paranjape and Park(16)] instead of the Lévy inequality for real-valued random variables. This means that the usual constant 2 in the right-hand side is changed to the constant $2^d$; that is, the constant $K_q$ in Lemma 2 should be replaced by the constant $2^d K_q$ (strictly speaking, by $2^{d-1} K_q$, since one factor 2 is already contained in $K_q$). The remaining two lines of the proof then are the same as those of Peligrad.(20)
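The Lévy inequality for random fields referred to here can be stated as follows for independent symmetric random variables $\{Y_j, j \le n\}$; this formulation is our paraphrase of the result in Gabriel(11) and Paranjape and Park:(16)

$$P\Big(\max_{k\le n}\Big|\sum_{j\le k} Y_j\Big| > \lambda\Big) \le 2^d P\Big(\Big|\sum_{j\le n} Y_j\Big| > \lambda\Big), \qquad \lambda > 0,$$

which is the source of the factor $2^d$ (in place of the one-dimensional constant 2) mentioned above.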
Lemma 6, finally, holds unchanged except for notation.
To prove Theorem 4 it now suffices to observe that, once we have checked the lemmas, the remainder amounts to handling sums (which is no problem) and applying Lemmas 5 and 6.
In order to extend Corollary 1 we must find an estimate of $E\max_{k\le n} S_k^2$. Toward this end we use the following replacement for (1.12). Namely, in view of the $d$-dimensional analog (3.1) of (1.3) or (1.5) (recall Bryc and Smolenski,(8) Lemma 2 and Remark 1) with $q = 2$, we know that $E(\sum_{j<k\le m} X_k)^2 \le C\sum_{j<k\le m} EX_k^2$ for every $\mathbf{1} \le j < m \le n$, where
$C$ depends on the $\rho^*_n$ and on $d$. We are thus in the position to apply Móricz,(15) [Thm. 8], according to which

where $C_2$ is some numerical constant. The following corollary thus emerges.
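The bound quoted from Móricz should, in the present notation, be of the following shape; the precise power and placement of the logarithmic factors is our reconstruction and is to be read as an assumption:

$$E\max_{k\le n} S_k^2 \le C_2 \Big(\prod_{i=1}^{d} \log_2 2n_i\Big)^2 \sum_{k\le n} EX_k^2.$$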
Corollary 2. Suppose that $EX_k = 0$ and that $E|X_k|^q < \infty$ for every $k \in \mathbf{Z}^d_+$ and some $q \ge 2$. If $\lim_{n\to\infty} \rho^*_n < 1$ we can find a constant $K(q, N, \rho^*_N, d)$, depending on $q$, $N$, $\rho^*_N$, and $d$, with $N$ and $\rho^*_N$ defined via (1.2), such that
3.2. Almost-Sure Results
In order to extend Theorem 2 we just have to replace indices by the corresponding boldface, $n$ raised to some power by $|\mathbf{n}|$ raised to the same power, and so on. The following quantities and their asymptotic behavior turn out to be of importance [see, e.g., Gut(12) for details and further references]:

and
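The two displays introduce the counting functions that govern summation over $\mathbf{Z}^d_+$; following Gut,(12) and stated here as a reconstruction, they are

$$d(j) = \#\{k \in \mathbf{Z}^d_+ : |k| = j\} \qquad\text{and}\qquad M(j) = \#\{k \in \mathbf{Z}^d_+ : |k| \le j\} = \sum_{i=1}^{j} d(i),$$

with the classical asymptotics $M(j) \sim j(\log j)^{d-1}/(d-1)!$ as $j \to \infty$.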
As for the implication (ii) implies (i), it follows exactly as before that, if (the analog of) (ii) holds, then we must have the analog of (2.7); that is, we conclude that

which holds if and only if $E|X_1|^p (\log^+ |X_1|)^{d-1} < \infty$; see Gut,(12) [Lemma 2.1]. The proof of this equivalence follows from partial summation and (3.7).
We are thus led to state the following generalization of Theorem 2. In the i.i.d. case it reduces to (part of) Gut,(12) [Thm. 4.1].
Theorem 5. Let $\{X_k, k \in \mathbf{Z}^d_+\}$ be identically distributed random variables, let $\alpha p > 1$ and $\alpha > 1/2$, and suppose that $EX_1 = 0$ when $\alpha \le 1$. Assume that $\lim_{n\to\infty} \rho^*_n < 1$. The following statements are equivalent:

(i) $E|X_1|^p (\log^+ |X_1|)^{d-1} < \infty$;

(ii) $\sum_{\mathbf{n}} |\mathbf{n}|^{p\alpha-2} P(\max_{j\le n} |S_j| > \varepsilon |\mathbf{n}|^{\alpha}) < \infty$ for all $\varepsilon > 0$.
Proof. We follow the proof of Theorem 2. In order to prove that (i) implies (ii) we truncate at the level $|\mathbf{n}|^{\alpha}$, and set $X'_{n,i} = X_i I(|X_i| \le |\mathbf{n}|^{\alpha}) - EX_i I(|X_i| \le |\mathbf{n}|^{\alpha})$ for $i \le n$, and $S'_{n,j} = \sum_{i\le j} X'_{n,i}$ for $n \ge \mathbf{1}$. Proceeding as in the preceding section we obtain
In this case the first sum of the right-hand side is finite, since $E|X_1|^p (\log^+ |X_1|)^{d-1} < \infty$ (shown earlier), and, arguing as before, we conclude that it remains to show that
In view of the modifications of the proof of Theorem 2 mentioned earlier, together with Corollary 2, we first observe that
where, as before, $k$ is suitably large and will be determined later.
Next we establish the counterpart of (2.1). Letting $b_k = P(k \le |X_1| < k+1)$, we use partial summation and (3.7) to obtain

which, in view of (3.8), is convergent if $k$ is selected such that $k > (p\alpha - 1)/(\alpha - 1/2)$. Case 2 follows analogously; we omit the details. The converse follows via obvious modifications of the corresponding part of the proof of Theorem 2.
The proof of the theorem thus is complete. □
We close by stating a generalization of Theorem 3. For the corresponding result in the i.i.d. case, see Smythe(22) or Gut.(12) The proof follows as in Etemadi,(10) [Thm. 2], except for trivial extensions from the case $d = 1$ to the general $d \ge 2$; compare also Smythe,(23) Lemma 2.1, and Gut,(12) Lemma 2.2, for related computations.
Theorem 6. Let $\{X_k, k \in \mathbf{Z}^d_+\}$ be identically distributed random variables and suppose that $\lim_{n\to\infty} \rho^*_n < 1$. Then $E|X_1| (\log^+ |X_1|)^{d-1} < \infty$ implies that $S_n/|\mathbf{n}| \to EX_1$ almost surely as $\mathbf{n} \to \infty$.
ACKNOWLEDGMENTS
The authors would like to thank the referee for his/her careful reading
of the manuscript and for many useful suggestions which improved the
presentation of the paper.
REFERENCES
1. Baum, L. E., and Katz, M. (1965). Convergence rates in the law of large numbers. Trans. Amer. Math. Soc. 120, 108-123.
2. Billingsley, P. (1968). Convergence of Probability Measures, John Wiley, New York.
3. Billingsley, P. (1995). Probability and Measure, Third Edition, John Wiley, New York.
4. Bradley, R. C. (1992). On the spectral density and asymptotic normality of weakly
dependent random fields. J. Theor. Probab. 5, 355-373.
5. Bradley, R. C. (1993). Equivalent mixing conditions for random fields. Ann. Probab. 21,
1921-1926.
6. Bradley, R. C. (1994). On regularity conditions for random fields. Proc. Amer. Math. Soc.
121, 593-598.
7. Bradley, R. C. (1997). Every lower psi-mixing Markov chain is interlaced rho-mixing.
Stoch. Proc. Appl. 72, 221-239.
8. Bryc, W., and Smolenski, W. (1993). Moment conditions for almost sure convergence of
weakly correlated random variables. Proc. Amer. Math. Soc. 119, 629-635.
9. Burkholder, D. L. (1973). Distribution function inequalities for martingales. Ann. Probab.
1, 19-42.
10. Etemadi, N. (1981). An elementary proof of the strong law of large numbers. Z. Wahrsch.
verw. Geb. 55, 119-122.
11. Gabriel, J. P. (1975). Loi des grands nombres, séries et martingales indexées par un ensemble filtrant. Thèse de doctorat, EPF, Lausanne.
12. Gut, A. (1978). Marcinkiewicz laws and convergence rates in the law of large numbers for
random variables with multidimensional indices. Ann. Probab. 6, 469-482.
13. Miller, C. (1994). Three theorems on ρ*-mixing random fields. J. Theor. Probab. 7, 867-882.
14. Marcinkiewicz, J., and Zygmund, A. (1938). Quelques théorèmes sur les fonctions indépendantes. Studia Math. 7, 104-120.
15. Móricz, F. (1977). Moment inequalities for the maximum of partial sums of random fields. Acta Sci. Math. 39, 353-366.
16. Paranjape, S. R., and Park, C. (1973). Laws of the iterated logarithm of multiparameter
Wiener processes. J. Multivariate Anal. 3, 132-136.
17. Peligrad, M. (1985). Convergence rates of the strong law for stationary mixing sequences.
Z. Wahrsch. verw. Geb. 70, 307-314.
18. Peligrad, M. (1989). The r-quick version of the strong law for stationary φ-mixing sequences. Almost Everywhere Convergence, Academic Press, Inc.
19. Peligrad, M. (1996). On the asymptotic normality of sequences of weakly dependent random variables. J. Theor. Probab. 9, 703-715.
20. Peligrad, M. (1998). Maximum of partial sums and an invariance principle for a class of
weakly dependent random variables. Proc. Amer. Math. Soc. 126, 1181-1189.
21. Rosenthal, H. P. (1970). On the subspaces of $L^p$ ($p > 2$) spanned by sequences of independent random variables. Israel J. Math. 8, 273-303.
22. Smythe, R. (1973). Strong laws of large numbers for r-dimensional arrays of random
variables. Ann. Probab. 1, 164-170.
23. Smythe, R. (1974). Sums of independent random variables on partially ordered sets. Ann.
Probab. 2, 906-917.