In probability theory, Kolmogorov's two-series theorem is a result about the convergence of random series. It follows from Kolmogorov's inequality and is used in one proof of the strong law of large numbers.
Statement of the theorem
Let $\left(X_n\right)_{n=1}^{\infty}$ be independent random variables with expected values $\mathbf{E}\left[X_n\right] = \mu_n$ and variances $\mathbf{Var}\left(X_n\right) = \sigma_n^2$, such that $\sum_{n=1}^{\infty} \mu_n$ converges in $\mathbb{R}$ and $\sum_{n=1}^{\infty} \sigma_n^2$ converges in $\mathbb{R}$. Then $\sum_{n=1}^{\infty} X_n$ converges in $\mathbb{R}$ almost surely.
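For example, if $(\varepsilon_n)_{n=1}^{\infty}$ are independent random signs with $\mathbb{P}(\varepsilon_n = 1) = \mathbb{P}(\varepsilon_n = -1) = 1/2$, then $X_n = \varepsilon_n / n$ has $\mu_n = 0$ and $\sigma_n^2 = 1/n^2$. Both series converge, so the theorem shows that the random harmonic series $\sum_{n=1}^{\infty} \varepsilon_n / n$ converges almost surely.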
Proof
Assume, without loss of generality, that $\mu_n = 0$ for all $n$; the reduction to this case is sketched below. Set $S_N = \sum_{n=1}^{N} X_n$. We will show that $\limsup_{N} S_N - \liminf_{N} S_N = 0$ with probability 1.
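The reduction is justified by centering: writing

\[ \sum_{n=1}^{N} X_n \;=\; \sum_{n=1}^{N} \left( X_n - \mu_n \right) \;+\; \sum_{n=1}^{N} \mu_n , \]

the second sum converges deterministically by assumption, so $\sum_{n=1}^{\infty} X_n$ converges almost surely if and only if $\sum_{n=1}^{\infty} \left( X_n - \mu_n \right)$ does; the centered variables are independent with mean $0$ and the same variances $\sigma_n^2$.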
For every $m \in \mathbb{N}$,

\[ \limsup_{N\to\infty} S_N - \liminf_{N\to\infty} S_N = \limsup_{N\to\infty} \left( S_N - S_m \right) - \liminf_{N\to\infty} \left( S_N - S_m \right) \leq 2 \max_{k\in\mathbb{N}} \left| \sum_{i=1}^{k} X_{m+i} \right| \]
Thus, for every $m \in \mathbb{N}$ and $\epsilon > 0$,

\[ \begin{aligned} \mathbb{P}\left( \limsup_{N\to\infty} \left( S_N - S_m \right) - \liminf_{N\to\infty} \left( S_N - S_m \right) \geq \epsilon \right) &\leq \mathbb{P}\left( 2 \max_{k\in\mathbb{N}} \left| \sum_{i=1}^{k} X_{m+i} \right| \geq \epsilon \right) \\ &= \mathbb{P}\left( \max_{k\in\mathbb{N}} \left| \sum_{i=1}^{k} X_{m+i} \right| \geq \frac{\epsilon}{2} \right) \\ &\leq \limsup_{N\to\infty} 4\epsilon^{-2} \sum_{i=m+1}^{m+N} \sigma_i^2 \\ &= 4\epsilon^{-2} \lim_{N\to\infty} \sum_{i=m+1}^{m+N} \sigma_i^2 \end{aligned} \]
Here the second inequality follows from Kolmogorov's inequality, recalled below.
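Kolmogorov's inequality states that for independent random variables $Y_1, \ldots, Y_N$ with mean $0$ and finite variances,

\[ \mathbb{P}\left( \max_{1 \leq k \leq N} \left| \sum_{i=1}^{k} Y_i \right| \geq \lambda \right) \;\leq\; \frac{1}{\lambda^2} \sum_{i=1}^{N} \mathbf{Var}\left( Y_i \right), \qquad \lambda > 0 . \]

Applied with $Y_i = X_{m+i}$ and $\lambda = \epsilon/2$, and letting $N \to \infty$, this yields the factor $4\epsilon^{-2}$ above.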
By the assumption that $\sum_{n=1}^{\infty} \sigma_n^2$ converges, the last term tends to $0$ as $m \to \infty$, for every $\epsilon > 0$. By the first display, the event in question equals $\left\{ \limsup_{N\to\infty} S_N - \liminf_{N\to\infty} S_N \geq \epsilon \right\}$, which does not depend on $m$; hence this event has probability $0$ for every $\epsilon > 0$. It follows that $\limsup_{N\to\infty} S_N - \liminf_{N\to\infty} S_N = 0$ with probability $1$, so the partial sums $S_N$ converge almost surely, which proves the theorem.
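A minimal numerical sketch, assuming NumPy, of the random-sign example mentioned after the statement: along a sample path, the partial sums $S_N$ of $X_n = \varepsilon_n / n$ settle toward a limit.

import numpy as np

# Partial sums S_N of X_n = eps_n / n, where eps_n are independent
# +-1 signs. Here mu_n = 0 and sigma_n^2 = 1/n^2, so both series in
# the theorem converge and S_N should converge almost surely.
rng = np.random.default_rng(seed=1)
N = 200_000
n = np.arange(1, N + 1)
eps = rng.choice((-1.0, 1.0), size=N)
partial_sums = np.cumsum(eps / n)

# Along this sample path, the tail of (S_N) has essentially stabilized.
print(partial_sums[[999, 9_999, 99_999, -1]])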