A compound Poisson process is a continuous-time stochastic process with jumps. The jumps arrive randomly according to a Poisson process, and the size of the jumps is also random, with a specified probability distribution. A compound Poisson process, parameterised by a rate $\lambda > 0$ and jump size distribution G, is a process $\{\,Y(t) : t \geq 0\,\}$ given by
$$Y(t) = \sum_{i=1}^{N(t)} D_{i}$$
where $\{\,N(t) : t \geq 0\,\}$ is a Poisson counting process with rate $\lambda$, and $\{\,D_{i} : i \geq 1\,\}$ are independent and identically distributed random variables with distribution function G, which are also independent of $\{\,N(t) : t \geq 0\,\}$.
When the $D_{i}$ are non-negative integer-valued random variables, the compound Poisson process is known as a stuttering Poisson process, which has the feature that two or more events can occur in a very short time.
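As an illustration of the definition, the following sketch simulates a single sample path: given $N(t) = n$, the jump times of a Poisson process on $[0, t]$ are distributed as $n$ independent uniform draws sorted in increasing order, so a path can be generated by drawing $N(t)$, the jump times, and the jump sizes in turn. The function name and the exponential jump-size distribution below are illustrative assumptions, not part of the definition.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def compound_poisson_path(rate, jump_sampler, t_max):
    """One sample path of Y on [0, t_max]: Poisson number of jumps,
    jump times uniform on [0, t_max], jump sizes from jump_sampler."""
    n_jumps = rng.poisson(rate * t_max)                # N(t_max) ~ Poisson(rate * t_max)
    times = np.sort(rng.uniform(0.0, t_max, n_jumps))  # ordered jump times
    jumps = jump_sampler(n_jumps)                      # i.i.d. D_1, ..., D_N
    return times, np.cumsum(jumps)                     # Y evaluated at each jump time

# Illustrative run: rate 2, exponential jump sizes with mean 0.5
times, values = compound_poisson_path(2.0, lambda n: rng.exponential(0.5, n), 10.0)
print(times[:5])
print(values[:5])
```

Between jumps the path is constant, so the pairs of jump times and cumulative sums returned above determine $Y$ on all of $[0, t_{\max}]$.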
Properties of the compound Poisson process
The expected value of a compound Poisson process can be calculated using a result known as Wald's equation:
$$\operatorname{E}(Y(t)) = \operatorname{E}(D_{1} + \cdots + D_{N(t)}) = \operatorname{E}(N(t))\operatorname{E}(D_{1}) = \operatorname{E}(N(t))\operatorname{E}(D) = \lambda t\operatorname{E}(D).$$
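A minimal Monte Carlo sketch can check this identity numerically; the rate, horizon, and exponential jump-size distribution below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

lam, t = 3.0, 2.0    # illustrative rate and time horizon
mean_D = 0.5         # E(D) for exponential jump sizes with mean 0.5

# Simulate many independent copies of Y(t) = D_1 + ... + D_{N(t)}.
N = rng.poisson(lam * t, size=100_000)
Y = np.array([rng.exponential(mean_D, n).sum() for n in N])

print(Y.mean())          # empirical E(Y(t))
print(lam * t * mean_D)  # Wald's equation: lambda * t * E(D) = 3.0
```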
Making similar use of the law of total variance, the variance can be calculated as:
$$\begin{aligned}
\operatorname{var}(Y(t)) &= \operatorname{E}(\operatorname{var}(Y(t) \mid N(t))) + \operatorname{var}(\operatorname{E}(Y(t) \mid N(t))) \\
&= \operatorname{E}(N(t)\operatorname{var}(D)) + \operatorname{var}(N(t)\operatorname{E}(D)) \\
&= \operatorname{var}(D)\operatorname{E}(N(t)) + \operatorname{E}(D)^{2}\operatorname{var}(N(t)) \\
&= \operatorname{var}(D)\lambda t + \operatorname{E}(D)^{2}\lambda t \\
&= \lambda t(\operatorname{var}(D) + \operatorname{E}(D)^{2}) \\
&= \lambda t\operatorname{E}(D^{2}).
\end{aligned}$$
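The variance identity can be checked the same way; for exponential jump sizes with mean $m$, $\operatorname{E}(D^{2}) = 2m^{2}$. Again the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

lam, t, mean_D = 3.0, 2.0, 0.5   # same illustrative parameters as above
E_D2 = 2 * mean_D**2             # E(D^2) = 2 m^2 for exponential jumps with mean m

N = rng.poisson(lam * t, size=100_000)
Y = np.array([rng.exponential(mean_D, n).sum() for n in N])

print(Y.var())         # empirical var(Y(t))
print(lam * t * E_D2)  # lambda * t * E(D^2) = 3.0
```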
Lastly, using the law of total probability, the moment generating function can be given as follows:
$$\Pr(Y(t) = i) = \sum_{n}\Pr(Y(t) = i \mid N(t) = n)\Pr(N(t) = n)$$
$$\begin{aligned}
\operatorname{E}(e^{sY}) &= \sum_{i} e^{si}\Pr(Y(t) = i) \\
&= \sum_{i} e^{si}\sum_{n}\Pr(Y(t) = i \mid N(t) = n)\Pr(N(t) = n) \\
&= \sum_{n}\Pr(N(t) = n)\sum_{i} e^{si}\Pr(Y(t) = i \mid N(t) = n) \\
&= \sum_{n}\Pr(N(t) = n)\sum_{i} e^{si}\Pr(D_{1} + D_{2} + \cdots + D_{n} = i) \\
&= \sum_{n}\Pr(N(t) = n)M_{D}(s)^{n} \\
&= \sum_{n}\Pr(N(t) = n)e^{n\ln(M_{D}(s))} \\
&= M_{N(t)}(\ln(M_{D}(s))) \\
&= e^{\lambda t\left(M_{D}(s) - 1\right)}.
\end{aligned}$$
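Although the derivation above sums over integer values of $Y(t)$, the closed form $e^{\lambda t(M_{D}(s)-1)}$ also holds for general jump distributions, since conditioning on $N(t) = n$ gives $\operatorname{E}(e^{sY(t)} \mid N(t) = n) = M_{D}(s)^{n}$ directly. A sketch checking it against an empirical average, with illustrative parameters and exponential jumps (for which $M_{D}(s) = 1/(1 - ms)$ when $s < 1/m$):

```python
import numpy as np

rng = np.random.default_rng(seed=2)

lam, t, mean_D, s = 3.0, 2.0, 0.5, 0.5
M_D = 1.0 / (1.0 - mean_D * s)   # MGF of an exponential with mean m, valid for s < 1/m

N = rng.poisson(lam * t, size=100_000)
Y = np.array([rng.exponential(mean_D, n).sum() for n in N])

print(np.exp(s * Y).mean())           # empirical E(e^{sY})
print(np.exp(lam * t * (M_D - 1.0)))  # closed form e^{lambda t (M_D(s) - 1)} = e^2
```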
Exponentiation of measures
Let N, Y, and D be as above. Let μ be the probability measure according to which D is distributed, i.e.
$$\mu(A) = \Pr(D \in A).$$
Let $\delta_{0}$ be the trivial probability distribution putting all of the mass at zero. Then the probability distribution of $Y(t)$ is the measure
$$\exp(\lambda t(\mu - \delta_{0}))$$
where the exponential $\exp(\nu)$ of a finite measure $\nu$ on Borel subsets of the real line is defined by
$$\exp(\nu) = \sum_{n=0}^{\infty}{\frac{\nu^{*n}}{n!}}$$
and
$$\nu^{*n} = \underbrace{\nu * \cdots * \nu}_{n{\text{ factors}}}$$
is the $n$-fold convolution of measures, and the series converges weakly.
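Since $\delta_{0}$ is the identity for convolution, $\exp(\lambda t(\mu - \delta_{0})) = e^{-\lambda t}\sum_{n=0}^{\infty}(\lambda t)^{n}\mu^{*n}/n!$, which for integer-valued jumps can be evaluated by truncating the series and convolving probability vectors. A sketch, with the function name, truncation depth, and example jump law as illustrative assumptions:

```python
import numpy as np
from math import exp, factorial

def compound_poisson_pmf(lam_t, jump_pmf, n_terms=50):
    """Truncate exp(lam_t * (mu - delta_0)) = e^{-lam_t} * sum_n (lam_t)^n mu^{*n} / n!
    for a jump distribution mu on {0, 1, 2, ...} given as a probability vector."""
    conv = np.array([1.0])   # mu^{*0} = delta_0
    out = np.zeros(1)
    for n in range(n_terms):
        w = exp(-lam_t) * lam_t**n / factorial(n)   # Poisson(lam_t) weight of n jumps
        if len(conv) > len(out):
            out = np.pad(out, (0, len(conv) - len(out)))  # grow support as needed
        out[:len(conv)] += w * conv
        conv = np.convolve(conv, jump_pmf)          # mu^{*(n+1)}
    return out

# Illustrative jump law: D = 1 or 2 with probability 1/2 each, and lambda * t = 2
pmf = compound_poisson_pmf(2.0, np.array([0.0, 0.5, 0.5]))
print(pmf[:6])    # Pr(Y(t) = 0), ..., Pr(Y(t) = 5)
print(pmf.sum())  # ~ 1 once the series is truncated deep enough
```

Each term weights the $n$-fold convolution $\mu^{*n}$ by the Poisson probability of $n$ jumps, matching the law-of-total-probability expression for $\Pr(Y(t) = i)$ given above.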
See also