In mathematics, a Carleman matrix is a matrix used to convert function composition into matrix multiplication. It is often used in iteration theory to find the continuous iteration of functions that cannot be iterated by pattern recognition alone. Other uses of Carleman matrices occur in the theory of probability generating functions and Markov chains.
Definition
The Carleman matrix of an infinitely differentiable function $f(x)$ is defined as:

$$M[f]_{jk} = \frac{1}{k!} \left[ \frac{d^{k}}{dx^{k}} (f(x))^{j} \right]_{x=0},$$
so as to satisfy the (Taylor series) equation:

$$(f(x))^{j} = \sum_{k=0}^{\infty} M[f]_{jk}\, x^{k}.$$
For instance, the computation of $f(x)$ by

$$f(x) = \sum_{k=0}^{\infty} M[f]_{1,k}\, x^{k}$$

simply amounts to the dot product of row 1 of $M[f]$ with the column vector $\left[1, x, x^{2}, x^{3}, \ldots\right]^{\tau}$.
The entries of $M[f]$ in the next row give the 2nd power of $f(x)$:

$$f(x)^{2} = \sum_{k=0}^{\infty} M[f]_{2,k}\, x^{k},$$
and also, in order to have the zeroth power of $f(x)$ in $M[f]$, we adopt the row 0 containing zeros everywhere except the first position, such that

$$f(x)^{0} = 1 = \sum_{k=0}^{\infty} M[f]_{0,k}\, x^{k} = 1 + \sum_{k=1}^{\infty} 0 \cdot x^{k}.$$
Thus, the dot product of $M[f]$ with the column vector $\left[1, x, x^{2}, \ldots\right]^{\tau}$ yields the column vector $\left[1, f(x), f(x)^{2}, \ldots\right]^{\tau}$:

$$M[f] \left[1, x, x^{2}, x^{3}, \ldots\right]^{\tau} = \left[1, f(x), (f(x))^{2}, (f(x))^{3}, \ldots\right]^{\tau}.$$
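The definition translates directly into code. Below is a minimal sketch using SymPy; the helper name `carleman_matrix` and the truncation size `N` are illustrative choices, not a standard API:

```python
# Minimal sketch: build the N x N truncation of the Carleman matrix M[f].
# Entry (j, k) is the k-th Taylor coefficient of f(x)**j at x = 0.
import sympy as sp

x = sp.symbols('x')

def carleman_matrix(f, N):
    M = sp.zeros(N, N)
    for j in range(N):
        series_j = sp.series(f**j, x, 0, N).removeO()
        for k in range(N):
            M[j, k] = series_j.coeff(x, k)
    return M

M = carleman_matrix(sp.sin(x), 6)
# Row 1 holds the Taylor coefficients of f itself:
print(M.row(1))  # Matrix([[0, 1, 0, -1/6, 0, 1/120]])
```

Multiplying this truncation by the vector $[1, x, x^{2}, \ldots]^{\tau}$ reproduces $[1, f(x), f(x)^{2}, \ldots]^{\tau}$ up to the truncation order.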
Bell matrix
The Bell matrix of a function $f(x)$ is defined as

$$B[f]_{jk} = \frac{1}{j!} \left[ \frac{d^{j}}{dx^{j}} (f(x))^{k} \right]_{x=0},$$
so as to satisfy the equation

$$(f(x))^{k} = \sum_{j=0}^{\infty} B[f]_{jk}\, x^{j},$$
so it is the transpose of the above Carleman matrix.
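Continuing the SymPy sketch from the definition above, a truncated Bell matrix is therefore just the transpose:

```python
# B[f] is the transpose of M[f] (same truncation caveats as above).
B = carleman_matrix(sp.sin(x), 6).T
```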
Jabotinsky matrix
Eri Jabotinsky developed this matrix concept in 1947 for the purpose of representing convolutions of polynomials. In 1963 he introduced the term "representation matrix", and generalized the concept to two-way-infinite matrices.[1] In that article only functions of the type
$$f(x) = a_{1}x + \sum_{k=2}^{\infty} a_{k}x^{k}$$
are discussed, but they are considered for positive *and* negative powers of the function. Several authors have since referred to the Bell matrices as the "Jabotinsky matrix" (D. Knuth 1992, W. D. Lang 2000),[full citation needed] and this may yet grow into a more canonical name.
Generalization
A generalization of the Carleman matrix of a function can be defined around any point, such as:

$$M[f]_{x_{0}} = M_{x}[x - x_{0}]\, M[f]\, M_{x}[x + x_{0}]$$
or

$$M[f]_{x_{0}} = M[g],$$

where $g(x) = f(x + x_{0}) - x_{0}$. This allows the matrix power to be related as:
$$(M[f]_{x_{0}})^{n} = M_{x}[x - x_{0}]\, M[f]^{n}\, M_{x}[x + x_{0}]$$
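As a small sketch of this generalization (reusing the hypothetical `carleman_matrix` helper from the definition section; the expansion point and the map are arbitrary examples):

```python
# The Carleman matrix of f about x0 is the ordinary Carleman matrix of the
# conjugated map g(x) = f(x + x0) - x0.
x0 = sp.Rational(1, 2)            # illustrative expansion point
g = sp.expand((x + x0)**2 - x0)   # f(x) = x^2  =>  g(x) = x^2 + x - 1/4
M_shifted = carleman_matrix(g, 5)
# Row 1 gives the Taylor coefficients of f(x0 + t) - x0 in t:
print(M_shifted.row(1))           # Matrix([[-1/4, 1, 1, 0, 0]])
```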
General Series
Another way to generalize the construction even further is to consider a general series in the following way:
Let

$$f(x) = \sum_{n} c_{n}(f) \cdot \psi_{n}(x)$$

be a series approximation of $f(x)$, where $\{\psi_{n}(x)\}_{n}$ is a basis of the space containing $f(x)$.
We can define

$$G[f]_{mn} = c_{n}(\psi_{m} \circ f),$$

and therefore we have

$$\psi_{m} \circ f = \sum_{n} c_{n}(\psi_{m} \circ f) \cdot \psi_{n} = \sum_{n} G[f]_{mn} \cdot \psi_{n}.$$
Now we can prove that

$$G[g \circ f] = G[g] \cdot G[f],$$

if we assume that $\{\psi_{n}(x)\}_{n}$ is also a basis for $g(x)$ and $g(f(x))$.
Let $g(x)$ be such that

$$\psi_{l} \circ g = \sum_{m} G[g]_{lm} \cdot \psi_{m},$$

where $G[g]_{lm} = c_{m}(\psi_{l} \circ g)$.
Now

$$\sum_{n} G[g \circ f]_{ln}\, \psi_{n} = \psi_{l} \circ (g \circ f) = (\psi_{l} \circ g) \circ f = \sum_{m} G[g]_{lm}\, (\psi_{m} \circ f) = \sum_{m} G[g]_{lm} \sum_{n} G[f]_{mn}\, \psi_{n} = \sum_{n,m} G[g]_{lm}\, G[f]_{mn}\, \psi_{n} = \sum_{n} \Big( \sum_{m} G[g]_{lm}\, G[f]_{mn} \Big) \psi_{n}.$$
Comparing the first and the last term, and from $\{\psi_{n}(x)\}_{n}$ being a basis for $f(x)$, $g(x)$ and $g(f(x))$, it follows that

$$G[g \circ f]_{ln} = \sum_{m} G[g]_{lm}\, G[f]_{mn}, \qquad \text{i.e.} \quad G[g \circ f] = G[g] \cdot G[f].$$
Examples
If we set $\psi_{n}(x) = x^{n}$ we recover the Carleman matrix.
If $\{e_{n}(x)\}_{n}$ is an orthonormal basis for a Hilbert space with a defined inner product $\langle f, g \rangle$, we can set $\psi_{n} = e_{n}$, and $c_{n}(f)$ will be $\langle f, e_{n} \rangle$. If $e_{n}(x) = e^{{\sqrt{-1}}\,nx}$ we have the analogue of a Fourier series, namely

$$c_{n}(f) = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) \cdot e^{-{\sqrt{-1}}\,nx}\, dx.$$
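As a numerical sanity check of this Fourier-basis construction (the grid size and all names below are illustrative), the shift map $f(x) = x + a$ sends $\psi_{m}$ to $e^{\sqrt{-1}\,ma}\,\psi_{m}$, so $G[f]$ should come out diagonal:

```python
# Sketch: G[f]_{mn} = c_n(psi_m ∘ f) for the shift f(x) = x + a in the
# Fourier basis psi_n(x) = exp(i n x); expected result: diag(exp(i m a)).
import numpy as np

a = 0.7
modes = range(-2, 3)
xs = np.linspace(-np.pi, np.pi, 4096, endpoint=False)

def c(n, h):
    # Fourier coefficient c_n(h) = (1/2 pi) * integral of h(x) e^{-inx} dx;
    # a plain mean is exact for integer frequencies on a uniform periodic grid.
    return np.mean(h(xs) * np.exp(-1j * n * xs))

G = np.array([[c(n, lambda t: np.exp(1j * m * (t + a))) for n in modes]
              for m in modes])
print(np.round(G, 6))  # numerically diag(exp(1j * m * a)) for m = -2..2
```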
Matrix properties
These matrices satisfy the fundamental relationships:
$$M[f \circ g] = M[f]\, M[g],$$
$$B[f \circ g] = B[g]\, B[f],$$
which makes the Carleman matrix $M$ a (direct) representation of $f(x)$, and the Bell matrix $B$ an anti-representation of $f(x)$. Here the term $f \circ g$ denotes the composition of functions $f(g(x))$.
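The composition law can be checked directly on truncations. For maps with $f(0) = g(0) = 0$ the truncated matrices are triangular and the identity holds exactly; this sketch reuses the hypothetical `carleman_matrix` helper from the definition section:

```python
# Verify M[f∘g] = M[f] M[g] on N x N truncations (exact when f(0)=g(0)=0).
N = 6
f = sp.sin(x)
g = x / (1 - x)

Mf = carleman_matrix(f, N)
Mg = carleman_matrix(g, N)
Mfg = carleman_matrix(f.subs(x, g), N)   # Carleman matrix of f(g(x))

assert (Mfg - Mf * Mg).applyfunc(sp.simplify) == sp.zeros(N, N)
```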
Other properties include:
$$M[f^{n}] = M[f]^{n},$$

where $f^{n}$ is an iterated function, and

$$M[f^{-1}] = M[f]^{-1},$$

where $f^{-1}$ is the inverse function (if the Carleman matrix is invertible).
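These iteration properties are easy to see on a concrete truncation: row 1 of $M[f]^{2}$ holds the Taylor coefficients of $f(f(x))$. A small example, under the same SymPy setup as above:

```python
# f(x) = x/(1-x) composes nicely: f(f(x)) = x/(1-2x) = x + 2x^2 + 4x^3 + ...
M2 = carleman_matrix(x / (1 - x), 5) ** 2
print(M2.row(1))  # Matrix([[0, 1, 2, 4, 8]])
```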
Examples
The Carleman matrix of a constant is:
$$M[a] = \begin{pmatrix} 1 & 0 & 0 & \cdots \\ a & 0 & 0 & \cdots \\ a^{2} & 0 & 0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
The Carleman matrix of the identity function is:
$$M_{x}[x] = \begin{pmatrix} 1 & 0 & 0 & \cdots \\ 0 & 1 & 0 & \cdots \\ 0 & 0 & 1 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
The Carleman matrix of a constant addition is:
$$M_{x}[a + x] = \begin{pmatrix} 1 & 0 & 0 & \cdots \\ a & 1 & 0 & \cdots \\ a^{2} & 2a & 1 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
The Carleman matrix of the successor function is given by the binomial coefficients:
$$M_{x}[1 + x] = \begin{pmatrix} 1 & 0 & 0 & 0 & \cdots \\ 1 & 1 & 0 & 0 & \cdots \\ 1 & 2 & 1 & 0 & \cdots \\ 1 & 3 & 3 & 1 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

$$M_{x}[1 + x]_{jk} = \binom{j}{k}$$
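A quick check of this closed form with the SymPy sketch from the definition section:

```python
# The successor map 1 + x reproduces the binomial-coefficient matrix.
M_succ = carleman_matrix(1 + x, 5)
assert all(M_succ[j, k] == sp.binomial(j, k)
           for j in range(5) for k in range(5))
```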
The Carleman matrix of the logarithm is related to the (signed) Stirling numbers of the first kind, scaled by factorials:

$$M_{x}[\log(1 + x)] = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & \cdots \\ 0 & 1 & -\frac{1}{2} & \frac{1}{3} & -\frac{1}{4} & \cdots \\ 0 & 0 & 1 & -1 & \frac{11}{12} & \cdots \\ 0 & 0 & 0 & 1 & -\frac{3}{2} & \cdots \\ 0 & 0 & 0 & 0 & 1 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

$$M_{x}[\log(1 + x)]_{jk} = s(k, j)\, \frac{j!}{k!}$$
The Carleman matrix of the logarithm is related to the (unsigned) Stirling numbers of the first kind, scaled by factorials:

$$M_{x}[-\log(1 - x)] = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & \cdots \\ 0 & 1 & \frac{1}{2} & \frac{1}{3} & \frac{1}{4} & \cdots \\ 0 & 0 & 1 & 1 & \frac{11}{12} & \cdots \\ 0 & 0 & 0 & 1 & \frac{3}{2} & \cdots \\ 0 & 0 & 0 & 0 & 1 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

$$M_{x}[-\log(1 - x)]_{jk} = |s(k, j)|\, \frac{j!}{k!}$$
The Carleman matrix of the exponential function is related to the Stirling numbers of the second kind, scaled by factorials:

$$M_{x}[\exp(x) - 1] = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & \cdots \\ 0 & 1 & \frac{1}{2} & \frac{1}{6} & \frac{1}{24} & \cdots \\ 0 & 0 & 1 & 1 & \frac{7}{12} & \cdots \\ 0 & 0 & 0 & 1 & \frac{3}{2} & \cdots \\ 0 & 0 & 0 & 0 & 1 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

$$M_{x}[\exp(x) - 1]_{jk} = S(k, j)\, \frac{j!}{k!}$$
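This identity can likewise be verified on a truncation (using SymPy's `stirling`, which defaults to the second kind):

```python
from sympy.functions.combinatorial.numbers import stirling

M_exp = carleman_matrix(sp.exp(x) - 1, 6)
assert all(M_exp[j, k] == stirling(k, j) * sp.factorial(j) / sp.factorial(k)
           for j in range(6) for k in range(6))
```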
The Carleman matrix of exponential functions is:

$$M_{x}[\exp(ax)] = \begin{pmatrix} 1 & 0 & 0 & 0 & \cdots \\ 1 & a & \frac{a^{2}}{2} & \frac{a^{3}}{6} & \cdots \\ 1 & 2a & 2a^{2} & \frac{4a^{3}}{3} & \cdots \\ 1 & 3a & \frac{9a^{2}}{2} & \frac{9a^{3}}{2} & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

$$M_{x}[\exp(ax)]_{jk} = \frac{(ja)^{k}}{k!}$$
The Carleman matrix of a constant multiple is:

$$M_{x}[cx] = \begin{pmatrix} 1 & 0 & 0 & \cdots \\ 0 & c & 0 & \cdots \\ 0 & 0 & c^{2} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
The Carleman matrix of a linear function is:

$$M_{x}[a + cx] = \begin{pmatrix} 1 & 0 & 0 & \cdots \\ a & c & 0 & \cdots \\ a^{2} & 2ac & c^{2} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
The Carleman matrix of a function $f(x) = \sum_{k=1}^{\infty} f_{k}x^{k}$ is:

$$M[f] = \begin{pmatrix} 1 & 0 & 0 & \cdots \\ 0 & f_{1} & f_{2} & \cdots \\ 0 & 0 & f_{1}^{2} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
The Carleman matrix of a function $f(x) = \sum_{k=0}^{\infty} f_{k}x^{k}$ is:

$$M[f] = \begin{pmatrix} 1 & 0 & 0 & \cdots \\ f_{0} & f_{1} & f_{2} & \cdots \\ f_{0}^{2} & 2f_{0}f_{1} & f_{1}^{2} + 2f_{0}f_{2} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
Carleman Approximation
Consider the following autonomous nonlinear system:
$$\dot{x} = f(x) + \sum_{j=1}^{m} g_{j}(x)\, d_{j}(t)$$
where $x \in R^{n}$ denotes the system state vector. Also, $f$ and the $g_{j}$'s are known analytic vector functions, and $d_{j}$ is the $j^{th}$ element of an unknown disturbance to the system.
At the desired nominal point, the nonlinear functions in the above system can be approximated by Taylor expansion:

$$f(x) \simeq f(x_{0}) + \sum_{k=1}^{\eta} \frac{1}{k!}\, \partial f_{[k]} \big|_{x=x_{0}}\, (x - x_{0})^{[k]}$$
where $\partial f_{[k]} \big|_{x=x_{0}}$ is the $k^{th}$ partial derivative of $f(x)$ with respect to $x$ at $x = x_{0}$, and $x^{[k]}$ denotes the $k^{th}$ Kronecker power of $x$.
Without loss of generality, we assume that $x_{0}$ is at the origin.
Applying the Taylor approximation to the system, we obtain

$$\dot{x} \simeq \sum_{k=0}^{\eta} A_{k} x^{[k]} + \sum_{j=1}^{m} \sum_{k=0}^{\eta} B_{jk} x^{[k]} d_{j}$$
where $A_{k} = \frac{1}{k!}\, \partial f_{[k]} \big|_{x=0}$ and $B_{jk} = \frac{1}{k!}\, \partial g_{j[k]} \big|_{x=0}$.
Consequently, the following linear system for the higher orders of the original states is obtained:
$$\frac{d(x^{[i]})}{dt} \simeq \sum_{k=0}^{\eta - i + 1} A_{i,k}\, x^{[k+i-1]} + \sum_{j=1}^{m} \sum_{k=0}^{\eta - i + 1} B_{j,i,k}\, x^{[k+i-1]}\, d_{j}$$
where

$$A_{i,k} = \sum_{l=0}^{i-1} I_{n}^{[l]} \otimes A_{k} \otimes I_{n}^{[i-1-l]},$$

and similarly

$$B_{j,i,k} = \sum_{l=0}^{i-1} I_{n}^{[l]} \otimes B_{j,k} \otimes I_{n}^{[i-1-l]}.$$
Employing the Kronecker product operator, the approximated system is presented in the following form:

$$\dot{x}_{\otimes} \simeq A x_{\otimes} + \sum_{j=1}^{m} \left[ B_{j} x_{\otimes} d_{j} + B_{j0} d_{j} \right] + A_{r}$$
where $x_{\otimes} = \begin{bmatrix} x^{T} & {x^{[2]}}^{T} & \cdots & {x^{[\eta]}}^{T} \end{bmatrix}^{T}$, and the $A$, $B_{j}$, $A_{r}$ and $B_{j0}$ matrices are defined in Hashemian and Armaou (2015).[2]
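As a minimal scalar illustration of the idea (one state, no disturbance; the truncation order `eta`, the example dynamics, and all names below are illustrative choices, not taken from the cited paper): for $\dot{x} = -x + x^{2}$, the powers $y_{i} = x^{i}$ satisfy $\dot{y}_{i} = -i\,y_{i} + i\,y_{i+1}$, and dropping terms above order $\eta$ leaves a finite linear system.

```python
# Carleman linearization of x' = -x + x^2, truncated at order eta.
import numpy as np
from scipy.integrate import solve_ivp

eta = 8
A = np.zeros((eta, eta))
for i in range(1, eta + 1):          # y_i' = -i*y_i + i*y_{i+1}
    A[i - 1, i - 1] = -i
    if i < eta:
        A[i - 1, i] = i

x0 = 0.5
y0 = np.array([x0 ** i for i in range(1, eta + 1)])
sol = solve_ivp(lambda t, y: A @ y, (0.0, 3.0), y0)

# Exact solution of this Bernoulli equation, for comparison:
x_exact = x0 * np.exp(-sol.t) / (1 - x0 + x0 * np.exp(-sol.t))
print(np.max(np.abs(sol.y[0] - x_exact)))  # small truncation + solver error
```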
References
^ Jabotinsky, Eri (1963). "Analytic Iteration". Transactions of the American Mathematical Society. 108 (3): 457–477. JSTOR 1993593.
^ Hashemian, N.; Armaou, A. (2015). "Fast Moving Horizon Estimation of nonlinear processes via Carleman linearization". IEEE Proceedings: 3379–3385. doi:10.1109/ACC.2015.7171854. ISBN 978-1-4799-8684-2. S2CID 13251259.
R. Aldrovandi, Special Matrices of Mathematical Physics: Stochastic, Circulant and Bell Matrices, World Scientific, 2001.
R. Aldrovandi and L. P. Freitas, Continuous Iteration of Dynamical Maps, online preprint, 1997.
P. Gralewicz and K. Kowalski, Continuous time evolution from iterated maps and Carleman linearization, online preprint, 2000.
K. Kowalski and W.-H. Steeb, Nonlinear Dynamical Systems and Carleman Linearization, World Scientific, 1991.
D. Knuth, Convolution Polynomials, arXiv preprint, 1992.
Jabotinsky, Eri (1953). "Representation of Functions by Matrices. Application to Faber Polynomials". Proceedings of the American Mathematical Society. 4 (4): 546–553.