Semisimple operator
In mathematics, a linear operator T on a vector space is semisimple if every T-invariant subspace has a complementary T-invariant subspace;[1] in other words, the vector space is a semisimple representation of the operator T. Equivalently, a linear operator is semisimple if its minimal polynomial is a product of distinct irreducible polynomials.[2]
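As a concrete illustration of the minimal-polynomial criterion, the following sketch (using SymPy) compares a real rotation matrix, which is semisimple over the reals because its minimal polynomial x^2 + 1 is irreducible there, with a Jordan block, which is not semisimple because its minimal polynomial (x - 1)^2 has a repeated factor. Both matrices are invented examples, not taken from the references.

```python
# A rough illustration of the minimal-polynomial criterion; the matrices
# below are invented examples, not drawn from the cited sources.
import sympy as sp

x = sp.symbols('x')

# Rotation by 90 degrees: its minimal polynomial x**2 + 1 is irreducible
# over Q (and over R), so the operator is semisimple over those fields
# even though it has no real eigenvalues.
R = sp.Matrix([[0, -1],
               [1,  0]])
print(sp.factor(R.charpoly(x).as_expr()))    # x**2 + 1 (does not factor over Q)
print((R**2 + sp.eye(2)) == sp.zeros(2, 2))  # True: x**2 + 1 annihilates R

# Jordan block: its minimal polynomial (x - 1)**2 has a repeated factor,
# so the operator is not semisimple (the line spanned by (1, 0) is
# invariant but has no invariant complement).
J = sp.Matrix([[1, 1],
               [0, 1]])
print(sp.factor(J.charpoly(x).as_expr()))    # (x - 1)**2
print((J - sp.eye(2)) == sp.zeros(2, 2))     # False: x - 1 alone does not annihilate J
```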
A linear operator on a finite-dimensional vector space over an algebraically closed field is semisimple if and only if it is diagonalizable.[1][3]
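The dependence on the base field can be seen with the same rotation matrix as above; the short SymPy sketch below (again an invented example) checks diagonalizability over the reals and over the complex numbers.

```python
# A small sketch of how the base field matters; the rotation matrix is an
# invented example, not taken from the cited sources.
import sympy as sp

R = sp.Matrix([[0, -1],
               [1,  0]])

# Over the reals, R has no eigenvalues and so is not diagonalizable there,
# although it is semisimple because x**2 + 1 is irreducible over R.
print(R.is_diagonalizable(reals_only=True))  # False
# Over the algebraically closed field C, x**2 + 1 = (x - i)(x + i) has
# distinct roots, so the same operator is diagonalizable, matching the
# statement above.
print(R.is_diagonalizable())                 # True (eigenvalues i and -i)
```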
Over a perfect field, the Jordan–Chevalley decomposition expresses an endomorphism x as a sum x = s + n of a semisimple endomorphism s and a nilpotent endomorphism n, where both s and n are polynomials in x (so, in particular, s and n commute).
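For a matrix with rational entries the decomposition can be computed concretely. The following is a minimal SymPy sketch under these assumptions: the matrix A is an invented example, and the semisimple part is read off from the Jordan form rather than constructed explicitly as a polynomial in A.

```python
# A minimal sketch of the Jordan–Chevalley decomposition for an invented
# rational matrix A; S is obtained from the Jordan form of A.
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])

P, J = A.jordan_form()                      # A = P * J * P**-1
D = sp.diag(*[J[i, i] for i in range(J.rows)])

S = P * D * P.inv()                         # semisimple (diagonalizable) part
N = A - S                                   # nilpotent part

print(S, N)
print(N**2 == sp.zeros(3, 3))               # True: N is nilpotent
print(S*N == N*S)                           # True: the two parts commute
```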
Notes
- ^ Lam (2001), p. 39
- ^ Jacobson 1979, the paragraph preceding Ch. II, § 5, Theorem 11.
- ^ This is immediate from the definition in terms of the minimal polynomial, but can be seen more directly as follows. Such an operator always has an eigenvector; if it is, in addition, semisimple, then it has a complementary invariant hyperplane, which itself has an eigenvector, and thus by induction the operator is diagonalizable. Conversely, a diagonalizable operator is easily seen to be semisimple: every invariant subspace is the direct sum of its intersections with the eigenspaces, so a basis of eigenvectors for it can be extended to an eigenbasis of the whole space, and the remaining basis vectors span a complementary invariant subspace.
References
- Hoffman, Kenneth; Kunze, Ray (1971). "Semi-simple operators". Linear Algebra (2nd ed.). Englewood Cliffs, N.J.: Prentice-Hall. MR 0276251.
- Jacobson, Nathan (1979). Lie Algebras. Republication of the 1962 original. New York: Dover Publications. ISBN 0-486-63832-4.
- Lam, Tsit-Yuen (2001). A First Course in Noncommutative Rings. Graduate Texts in Mathematics. Vol. 131 (2nd ed.). Springer. ISBN 0-387-95183-0.