For the scalar product or dot product of coordinate vectors, see dot product.
Geometric interpretation of the angle between two vectors defined using an inner product
Scalar product spaces, over any field, have "scalar products" that are symmetric and linear in the first argument. Hermitian product spaces are restricted to the field of complex numbers and have "Hermitian products" that are conjugate-symmetric and linear in the first argument. Inner product spaces may be defined over any field, having "inner products" that are linear in the first argument, conjugate-symmetric, and positive-definite. Unlike inner products, scalar products and Hermitian products need not be positive-definite.
In mathematics, an inner product space (or, rarely, a Hausdorff pre-Hilbert space[1][2]) is a vector space with a binary operation called an inner product. This operation associates each pair of vectors in the space with a scalar quantity known as the inner product of the vectors, often denoted using angle brackets (as in $\langle a, b \rangle$).[3] Inner products allow the rigorous introduction of intuitive geometrical notions, such as the length of a vector or the angle between two vectors. They also provide the means of defining orthogonality between vectors (zero inner product). Inner product spaces generalize Euclidean spaces (in which the inner product is the dot product,[4] also known as the scalar product) to vector spaces of any (possibly infinite) dimension, and are studied in functional analysis. Inner product spaces over the field of complex numbers are sometimes referred to as unitary spaces. The first usage of the concept of a vector space with an inner product is due to Giuseppe Peano, in 1898.[5]
An inner product naturally induces an associated norm, $\|x\| := \sqrt{\langle x, x \rangle}$ ($\|x\|$ and $\|y\|$ are the norms of $x$ and $y$ in the picture), which canonically makes every inner product space into a normed vector space. If this normed space is also complete (i.e., a Banach space) then the inner product space is called a Hilbert space.[1] If an inner product space $H$ is not a Hilbert space then it can be "extended" to a Hilbert space $\overline{H}$, called a completion. Explicitly, this means that $H$ is linearly and isometrically embedded onto a dense vector subspace of $\overline{H}$ and that the inner product on $\overline{H}$ is the unique continuous extension of the original inner product on $H$.[1][6]
Conditions (1) and (2) (linearity in the first argument and conjugate symmetry, respectively) are the defining properties of a Hermitian form, which is a special type of sesquilinear form.[1] A sesquilinear form is Hermitian if and only if $\langle x, x \rangle$ is real for all $x$.[1] In particular, condition (2) implies[proof 2] that $\langle x, x \rangle$ is a real number for all $x$.
The above three conditions are the defining properties of an inner product, which is why an inner product is sometimes (equivalently) defined as being a positive-definite Hermitian form.
An inner product can equivalently be defined as a positive-definite sesquilinear form.[1][note 4]
Assuming (1) holds, condition (3) will hold if and only if both conditions (4) and (5) below hold:[6][1]
(4) Non-negativity: $\langle x, x \rangle \geq 0$ for every vector $x$.
(5) Point-separating (definiteness): $\langle x, x \rangle = 0$ implies $x = 0$.
For every vector $x$, conjugate symmetry guarantees $\langle x, x \rangle = \overline{\langle x, x \rangle}$, which implies that $\langle x, x \rangle$ is a real number. It also guarantees that for all vectors $x$ and $y$,
$\langle x, y \rangle + \langle y, x \rangle = 2 \operatorname{Re} \langle x, y \rangle,$
where $\operatorname{Re}$ denotes the real part of a scalar.
Conjugate symmetry and linearity in the first variable imply[proof 3] conjugate linearity, also known as antilinearity, in the second argument; explicitly, this means that for all vectors $x, y, z$ and any scalar $s$,
$\langle x, s y \rangle = \overline{s} \, \langle x, y \rangle \qquad \text{and} \qquad \langle x, y + z \rangle = \langle x, y \rangle + \langle x, z \rangle.$
(Antilinearity in the 2nd argument)
This shows that every inner product is also a sesquilinear form and that inner products are additive in each argument, meaning that for all vectors $x, y, z$:
$\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle \qquad \text{and} \qquad \langle x, y + z \rangle = \langle x, y \rangle + \langle x, z \rangle.$
Additivity in each argument implies the following important generalization of the familiar square expansion:
$\langle x + y, z + w \rangle = \langle x, z \rangle + \langle x, w \rangle + \langle y, z \rangle + \langle y, w \rangle,$
where $x, y, z, w$ are vectors.
In the case of $\mathbb{R}$, conjugate symmetry reduces to symmetry, and so sesquilinearity reduces to bilinearity. Hence an inner product on a real vector space is a positive-definite symmetric bilinear form. That is, when the scalar field is $\mathbb{R}$, then
$\langle x, y \rangle = \langle y, x \rangle$ (Symmetry)
and the binomial expansion becomes:
$\langle x + y, x + y \rangle = \langle x, x \rangle + 2 \langle x, y \rangle + \langle y, y \rangle.$
Alternative definitions, notations and remarks
A common special case of the inner product, the scalar product or dot product, is written with a centered dot: $a \cdot b$.
Some authors, especially in physics and matrix algebra, prefer to define the inner product and the sesquilinear form with linearity in the second argument rather than the first. Then the first argument becomes conjugate linear, rather than the second. In those disciplines, we would write the inner product $\langle x, y \rangle$ as $\langle y \mid x \rangle$ (the bra–ket notation of quantum mechanics), respectively $y^{\dagger} x$ (dot product as a case of the convention of forming the matrix product $AB$ as the dot products of rows of $A$ with columns of $B$). Here, the kets and columns are identified with the vectors of $V$, and the bras and rows with the linear functionals (covectors) of the dual space $V^{*}$, with conjugacy associated with duality. This reverse order is now occasionally followed in the more abstract literature,[10] taking $\langle x, y \rangle$ to be conjugate linear in $x$ rather than $y$. A few instead find a middle ground by recognizing both $\langle \cdot, \cdot \rangle$ and $\langle \cdot \mid \cdot \rangle$ as distinct notations, differing only in which argument is conjugate linear.
There are various technical reasons why it is necessary to restrict the base field to $\mathbb{R}$ and $\mathbb{C}$ in the definition. Briefly, the base field has to contain an ordered subfield in order for non-negativity to make sense,[11] and therefore has to have characteristic equal to 0 (since any ordered field has to have such characteristic). This immediately excludes finite fields. The base field has to have additional structure, such as a distinguished automorphism. More generally, any quadratically closed subfield of $\mathbb{R}$ or $\mathbb{C}$ will suffice for this purpose (for example, the algebraic numbers or the constructible numbers). However, in the cases where it is a proper subfield (that is, neither $\mathbb{R}$ nor $\mathbb{C}$), even finite-dimensional inner product spaces will fail to be metrically complete. In contrast, all finite-dimensional inner product spaces over $\mathbb{R}$ or $\mathbb{C}$, such as those used in quantum computation, are automatically metrically complete (and hence Hilbert spaces).
In some cases, one needs to consider non-negative semi-definite sesquilinear forms. This means that $\langle x, x \rangle$ is only required to be non-negative. The treatment of these cases is illustrated below.
Some examples
Real and complex numbers
Among the simplest examples of inner product spaces are $\mathbb{R}$ and $\mathbb{C}$.
The real numbers $\mathbb{R}$ are a vector space over $\mathbb{R}$ that becomes a real inner product space when endowed with standard multiplication as its real inner product:[4]
$\langle x, y \rangle := x y.$
The complex numbers $\mathbb{C}$ are a vector space over $\mathbb{C}$ that becomes a complex inner product space when endowed with the complex inner product
$\langle x, y \rangle := x \overline{y}.$
Unlike with the real numbers, the assignment $(x, y) \mapsto x y$ does not define a complex inner product on $\mathbb{C}$.
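As a quick illustration (a minimal Python sketch, not part of the original article), the conjugate in $\langle x, y \rangle = x \overline{y}$ is exactly what makes $\langle x, x \rangle$ a non-negative real number, so that $\sqrt{\langle x, x \rangle}$ can serve as a norm:

```python
# Minimal sketch: the complex inner product <x, y> = x * conj(y)
# versus the invalid assignment (x, y) |-> x * y, using Python's complex type.

def inner(x: complex, y: complex) -> complex:
    """Complex inner product on C, linear in the first argument."""
    return x * y.conjugate()

x = 3 + 4j
print(inner(x, x))   # (25+0j): real and non-negative, so sqrt(<x, x>) is a norm
print(x * x)         # (-7+24j): not a non-negative real, so x*y gives no norm
```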
Euclidean vector space
More generally, the real $n$-space $\mathbb{R}^{n}$ with the dot product
$\langle x, y \rangle = x \cdot y = x_1 y_1 + \cdots + x_n y_n$
is an inner product space,[4] an example of a Euclidean vector space.
The general form of an inner product on $\mathbb{C}^{n}$ is known as the Hermitian form and is given by
$\langle x, y \rangle = y^{\dagger} \mathbf{M} x,$
where $\mathbf{M}$ is any Hermitian positive-definite matrix and $y^{\dagger}$ is the conjugate transpose of $y$. For the real case, this corresponds to the dot product of the results of directionally-different scaling of the two vectors, with positive scale factors and orthogonal directions of scaling. It is a weighted-sum version of the dot product with positive weights, up to an orthogonal transformation.
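A small numerical check (a Python sketch under the article's linear-in-the-first-argument convention; the matrix $\mathbf{M}$ below is an arbitrary illustrative choice) that such a form is conjugate-symmetric and positive-definite:

```python
import numpy as np

# Sketch: an inner product on C^3 built from a Hermitian positive-definite
# matrix M, with <x, y> = y^dagger M x (linear in the first argument).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
M = A.conj().T @ A + 3 * np.eye(3)            # Hermitian and positive-definite

def inner(x, y):
    return y.conj() @ M @ x

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)

print(np.isclose(inner(x, y), np.conj(inner(y, x))))           # conjugate symmetry
print(inner(x, x).real > 0 and abs(inner(x, x).imag) < 1e-12)  # <x, x> > 0
```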
Hilbert space
The article on Hilbert spaces has several examples of inner product spaces, wherein the metric induced by the inner product yields a complete metric space. An example of an inner product space which induces an incomplete metric is the space $C([a, b])$ of continuous complex-valued functions $f$ and $g$ on the interval $[a, b]$. The inner product is
$\langle f, g \rangle = \int_a^b f(t) \overline{g(t)} \, \mathrm{d}t.$
This space is not complete; consider for example, for the interval $[-1, 1]$, the sequence of continuous "step" functions $\{ f_k \}_k$ defined by:
$f_k(t) = \begin{cases} 0 & t \in [-1, 0] \\ k t & t \in (0, \tfrac{1}{k}] \\ 1 & t \in (\tfrac{1}{k}, 1] \end{cases}$
This sequence is a Cauchy sequence for the norm induced by the preceding inner product, which does not converge to a continuous function.
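The following Python sketch (illustrative, not from the original article) approximates the induced $L^2$ distances between these ramp functions and shows them shrinking, even though the pointwise limit is a discontinuous step function outside $C([-1, 1])$:

```python
import numpy as np

# Sketch: the "ramp" functions f_k(t) = 0 for t <= 0, k*t for 0 < t <= 1/k,
# 1 for t > 1/k, on [-1, 1]. Their pairwise L2 distances shrink (a Cauchy
# sequence), but the pointwise limit is a discontinuous step function.
t = np.linspace(-1.0, 1.0, 200001)
dt = t[1] - t[0]

def f(k):
    return np.clip(k * t, 0.0, 1.0)

def l2_dist(g, h):
    return np.sqrt(np.sum(np.abs(g - h) ** 2) * dt)

for k, m in [(2, 4), (8, 16), (32, 64), (128, 256)]:
    print(k, m, l2_dist(f(k), f(m)))     # distances tend to 0
```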
Random variables
For real random variables $X$ and $Y$, the expected value of their product
$\langle X, Y \rangle = \mathbb{E}[X Y]$
is an inner product.[12][13][14] In this case, $\langle X, X \rangle = 0$ if and only if $\mathbb{P}(X = 0) = 1$ (that is, $X = 0$ almost surely), where $\mathbb{P}$ denotes the probability of the event. This definition of expectation as inner product can be extended to random vectors as well.
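A Monte Carlo sketch (illustrative Python; the distributions below are arbitrary choices) of this expectation inner product:

```python
import numpy as np

# Sketch: estimating the inner product <X, Y> = E[XY] of two real random
# variables by Monte Carlo; the distributions are illustrative.
rng = np.random.default_rng(1)
n = 1_000_000
X = rng.standard_normal(n)
Y = 2.0 * X + rng.standard_normal(n)   # Y is correlated with X

print(np.mean(X * Y))                  # estimates E[XY] = 2
print(np.sqrt(np.mean(X * X)))         # the induced "norm" sqrt(E[X^2]) = 1
```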
Real matrices
For real square matrices of the same size, the assignment
$\langle A, B \rangle := \operatorname{tr}\left(A B^{\mathsf{T}}\right),$
with transpose as conjugation, is an inner product.
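As a brief check (a Python sketch, not part of the original text), this trace form coincides with the entrywise (Frobenius) inner product and is positive-definite:

```python
import numpy as np

# Sketch: the trace inner product <A, B> = tr(A B^T) on real square matrices.
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

inner = np.trace(A @ B.T)
print(np.isclose(inner, np.sum(A * B)))   # equals the entrywise (Frobenius) sum
print(np.trace(A @ A.T) > 0)              # <A, A> > 0 for A != 0
```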
Vector spaces with forms
On an inner product space, or more generally a vector space with a nondegenerate form (hence an isomorphism $V \to V^{*}$), vectors can be sent to covectors (in coordinates, via transpose), so that one can take the inner product and outer product of two vectors, not simply of a vector and a covector.
Basic results, terminology, and definitions
Norm
Every inner product space induces a norm, called its canonical norm, that is defined by[4]
$\|x\| = \sqrt{\langle x, x \rangle}.$
The Cauchy–Schwarz inequality states that for all vectors $x$ and $y$,
$|\langle x, y \rangle| \leq \|x\| \, \|y\|,$
with equality if and only if $x$ and $y$ are linearly dependent. In the Russian mathematical literature, this inequality is also known as the Cauchy–Bunyakovsky inequality or the Cauchy–Bunyakovsky–Schwarz inequality.
When $\langle x, y \rangle$ is a real number, the Cauchy–Schwarz inequality guarantees that $\frac{\langle x, y \rangle}{\|x\| \, \|y\|}$ lies in the domain of the inverse trigonometric function $\arccos : [-1, 1] \to [0, \pi]$, and so the (non-oriented) angle between $x$ and $y$ can be defined as:
$\angle(x, y) = \arccos \frac{\langle x, y \rangle}{\|x\| \, \|y\|}, \qquad x \neq 0, \; y \neq 0.$
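For the standard dot product on $\mathbb{R}^3$, this angle can be computed directly (a small Python sketch; the vectors are illustrative):

```python
import numpy as np

# Sketch: the non-oriented angle between two real vectors; Cauchy-Schwarz
# guarantees the arccos argument lies in [-1, 1].
x = np.array([1.0, 0.0, 1.0])
y = np.array([1.0, 1.0, 0.0])

cos_angle = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))   # clip guards rounding error
print(np.degrees(angle))                           # 60.0 for these vectors
```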
Two vectors $x$ and $y$ are called orthogonal, written $x \perp y$, if their inner product is zero: $\langle x, y \rangle = 0$. This happens if and only if $\|x\| \leq \|x + s y\|$ for all scalars $s$.[15] Moreover, for $y \neq 0$, the scalar $s_0 = -\tfrac{\langle x, y \rangle}{\|y\|^2}$ minimizes $s \mapsto \|x + s y\|^2$, with value $\|x\|^2 - \tfrac{|\langle x, y \rangle|^2}{\|y\|^2}$.
For a complex (but not real) inner product space $H$, a linear operator $T : H \to H$ is identically $0$ if and only if $\langle T x, x \rangle = 0$ for every $x \in H$.[15]
Orthogonal complement
The orthogonal complement of a subset $C \subseteq V$ is the set $C^{\perp}$ of all vectors $y \in V$ such that $y$ and $c$ are orthogonal for all $c \in C$; that is, it is the set
$C^{\perp} := \{\, y \in V : \langle y, c \rangle = 0 \text{ for all } c \in C \,\}.$
This set $C^{\perp}$ is always a closed vector subspace of $V$, and if the closure $\operatorname{cl}_V C$ of $C$ in $V$ is a vector subspace then $\operatorname{cl}_V C = \left(C^{\perp}\right)^{\perp}$.
The proof of the identity $\|x + y\|^2 = \|x\|^2 + \|y\|^2$ for orthogonal vectors $x$ and $y$ requires only expressing the definition of norm in terms of the inner product and multiplying out, using the property of additivity in each argument.
The name Pythagorean theorem arises from the geometric interpretation in Euclidean geometry.
Ptolemy's inequality is, in fact, a necessary and sufficient condition for the existence of an inner product corresponding to a given norm. In detail, Isaac Jacob Schoenberg proved in 1952 that, given any real seminormed space, if its seminorm is ptolemaic, then the seminorm is the norm associated with an inner product.[16]
Real and complex parts of inner products
Suppose that $\langle \cdot, \cdot \rangle$ is an inner product on $V$ (so it is antilinear in its second argument). The polarization identity shows that the real part of the inner product is
$\operatorname{Re} \langle x, y \rangle = \frac{1}{4} \left( \|x + y\|^2 - \|x - y\|^2 \right).$
If $V$ is a real vector space then
$\langle x, y \rangle = \operatorname{Re} \langle x, y \rangle = \frac{1}{4} \left( \|x + y\|^2 - \|x - y\|^2 \right)$
and the imaginary part (also called the complex part) of is always 0.
Assume for the rest of this section that $V$ is a complex vector space.
The polarization identity for complex vector spaces shows that
$\langle x, y \rangle = \frac{1}{4} \left( \|x + y\|^2 - \|x - y\|^2 + i \|x + i y\|^2 - i \|x - i y\|^2 \right).$
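A numeric spot-check of this identity (a Python sketch under the same antilinear-in-the-second-argument convention, with randomly chosen vectors):

```python
import numpy as np

# Sketch: checking the complex polarization identity
#   <x, y> = 1/4 * (||x+y||^2 - ||x-y||^2 + i||x+iy||^2 - i||x-iy||^2)
# for <u, v> = sum_k u_k * conj(v_k), antilinear in the second argument.
rng = np.random.default_rng(3)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

def inner(u, v):
    return np.sum(u * np.conj(v))

def sq_norm(u):
    return inner(u, u).real

lhs = inner(x, y)
rhs = 0.25 * (sq_norm(x + y) - sq_norm(x - y)
              + 1j * (sq_norm(x + 1j * y) - sq_norm(x - 1j * y)))
print(np.isclose(lhs, rhs))    # True
```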
The map defined by $\langle x \mid y \rangle = \langle y, x \rangle$ for all $x, y \in V$ satisfies the axioms of the inner product except that it is antilinear in its first, rather than its second, argument. The real parts of both $\langle x \mid y \rangle$ and $\langle x, y \rangle$ are equal to $\operatorname{Re} \langle x, y \rangle$, but the inner products differ in their complex part:
$\langle x, y \rangle = \operatorname{Re} \langle x, y \rangle + i \operatorname{Re} \langle x, i y \rangle \qquad \text{whereas} \qquad \langle x \mid y \rangle = \operatorname{Re} \langle x, y \rangle - i \operatorname{Re} \langle x, i y \rangle.$
The last equality is similar to the formula expressing a linear functional in terms of its real part.
Real vs. complex inner products
Let $V_{\mathbb{R}}$ denote $V$ considered as a vector space over the real numbers rather than complex numbers.
The real part of the complex inner product $\langle x, y \rangle$ is the map $\langle x, y \rangle_{\mathbb{R}} = \operatorname{Re} \langle x, y \rangle : V_{\mathbb{R}} \times V_{\mathbb{R}} \to \mathbb{R}$, which necessarily forms a real inner product on the real vector space $V_{\mathbb{R}}$. Every inner product on a real vector space is a bilinear and symmetric map.
For example, if $V = \mathbb{C}$ with inner product $\langle x, y \rangle = x \overline{y}$, where $V$ is a vector space over the field $\mathbb{C}$, then $V_{\mathbb{R}} = \mathbb{R}^2$ is a vector space over $\mathbb{R}$ and $\langle x, y \rangle_{\mathbb{R}}$ is the dot product $x \cdot y$, where $x = a + i b \in V = \mathbb{C}$ is identified with the point $(a, b) \in V_{\mathbb{R}} = \mathbb{R}^2$ (and similarly for $y$). Also, had $\langle x, y \rangle$ been instead defined to be the symmetric map $\langle x, y \rangle = x y$ (rather than the usual conjugate symmetric map $\langle x, y \rangle = x \overline{y}$) then its real part $\langle x, y \rangle_{\mathbb{R}}$ would not be the dot product; furthermore, without the complex conjugate, if $x \in \mathbb{C}$ but $x \notin \mathbb{R}$ then $\langle x, x \rangle = x^2 \notin [0, \infty)$, so the assignment $x \mapsto \sqrt{\langle x, x \rangle}$ would not define a norm.
The next examples show that although real and complex inner products have many properties and results in common, they are not entirely interchangeable.
For instance, if $\langle x, y \rangle = 0$ then $\langle x, y \rangle_{\mathbb{R}} = 0$, but the next example shows that the converse is in general not true.
Given any $x \in V$, the vector $i x$ (which is the vector $x$ rotated by 90°) belongs to $V$ and so also belongs to $V_{\mathbb{R}}$ (although scalar multiplication of $x$ by $i = \sqrt{-1}$ is not defined in $V_{\mathbb{R}}$, it is still true that the vector in $V$ denoted by $i x$ is an element of $V_{\mathbb{R}}$). For the complex inner product, $\langle x, i x \rangle = -i \|x\|^2$, whereas for the real inner product the value is always $\langle x, i x \rangle_{\mathbb{R}} = 0$.
If $\mathbb{C}$ has the inner product $\langle x, y \rangle = x \overline{y}$ mentioned above, then the map $A : \mathbb{C} \to \mathbb{C}$ defined by $A x = i x$ is a non-zero linear map (linear for both $\mathbb{C}$ and $\mathbb{C}_{\mathbb{R}}$) that denotes rotation by $90°$ in the plane. This map satisfies $\langle A x, x \rangle_{\mathbb{R}} = 0$ for all vectors $x$; had this inner product been complex instead of real, then this would have been enough to conclude that this linear map $A$ is identically $0$ (i.e. that $A = 0$), which rotation is certainly not. In contrast, for all non-zero $x$, the map satisfies $\langle A x, x \rangle = i \|x\|^2 \neq 0$.
Orthonormal sequences
Let $V$ be a finite-dimensional inner product space of dimension $n$. Recall that every basis of $V$ consists of exactly $n$ linearly independent vectors. Using the Gram–Schmidt process we may start with an arbitrary basis and transform it into an orthonormal basis, that is, into a basis in which all the elements are orthogonal and have unit norm. In symbols, a basis $\{ e_1, \ldots, e_n \}$ is orthonormal if $\langle e_i, e_j \rangle = 0$ for every $i \neq j$ and $\langle e_i, e_i \rangle = \|e_i\|^2 = 1$ for each index $i$.
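A compact sketch of the Gram–Schmidt process for the standard dot product on $\mathbb{R}^n$ (illustrative Python; the starting basis is an arbitrary choice):

```python
import numpy as np

# Sketch of the Gram-Schmidt process for the standard dot product on R^n:
# turn an arbitrary basis into an orthonormal one.
def gram_schmidt(basis):
    orthonormal = []
    for v in basis:
        w = v.astype(float)
        for e in orthonormal:
            w = w - np.dot(w, e) * e       # remove the component along e
        orthonormal.append(w / np.linalg.norm(w))
    return np.array(orthonormal)

basis = np.array([[1.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])
Q = gram_schmidt(basis)
print(np.allclose(Q @ Q.T, np.eye(3)))     # rows are orthonormal
```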
This definition of orthonormal basis generalizes to the case of infinite-dimensional inner product spaces in the following way. Let $V$ be any inner product space. Then a collection
$E = \{ e_a \}_{a \in A}$
is a basis for $V$ if the subspace of $V$ generated by finite linear combinations of elements of $E$ is dense in $V$ (in the norm induced by the inner product). Say that $E$ is an orthonormal basis for $V$ if it is a basis and
$\langle e_a, e_b \rangle = 0$ if $a \neq b$, and $\langle e_a, e_a \rangle = \|e_a\|^2 = 1$ for all $a, b \in A$.
Using an infinite-dimensional analog of the Gram–Schmidt process one may show:
Theorem. Any separable inner product space has an orthonormal basis.
The two previous theorems raise the question of whether all inner product spaces have an orthonormal basis. The answer, it turns out, is negative. This is a non-trivial result, and is proved below. The following proof is taken from Halmos's A Hilbert Space Problem Book (see the references).[citation needed]
Recall that the dimension of an inner product space is the cardinality of a maximal orthonormal system that it contains (by Zorn's lemma it contains at least one, and any two have the same cardinality). An orthonormal basis is certainly a maximal orthonormal system but the converse need not hold in general. If $G$ is a dense subspace of an inner product space $V$, then any orthonormal basis for $G$ is automatically an orthonormal basis for $V$. Thus, it suffices to construct an inner product space $V$ with a dense subspace $G$ whose dimension is strictly smaller than that of $V$.
Let $K$ be a Hilbert space of dimension $\aleph_0$ (for instance, $K = \ell^2(\mathbb{N})$). Let $E$ be an orthonormal basis of $K$, so $|E| = \aleph_0$. Extend $E$ to a Hamel basis $E \cup F$ for $K$, where $E \cap F = \varnothing$. Since it is known that the Hamel dimension of $K$ is $c$, the cardinality of the continuum, it must be that $|F| = c$.
Let $L$ be a Hilbert space of dimension $c$ (for instance, $L = \ell^2(\mathbb{R})$). Let $B$ be an orthonormal basis for $L$, and let $\varphi : F \to B$ be a bijection. Then there is a linear transformation $T : K \to L$ such that $T f = \varphi(f)$ for $f \in F$, and $T e = 0$ for $e \in E$.
Let $V = K \oplus L$ and let $G = \{ (k, T k) : k \in K \}$ be the graph of $T$. Let $\overline{G}$ be the closure of $G$ in $V$; we will show $\overline{G} = V$. Since for any $e \in E$ we have $(e, 0) \in G$, it follows that $K \oplus 0 \subseteq \overline{G}$.
Next, if $b \in B$, then $b = T f = \varphi(f)$ for some $f \in F \subseteq K$, so $(f, b) \in G \subseteq \overline{G}$; since $(f, 0) \in \overline{G}$ as well, we also have $(0, b) \in \overline{G}$. It follows that $0 \oplus L \subseteq \overline{G}$, so $\overline{G} = V$, and $G$ is dense in $V$.
Finally, $\{ (e, 0) : e \in E \}$ is a maximal orthonormal set in $G$; if
$\langle (e, 0), (k, T k) \rangle = 0$
for all $e \in E$, then $k = 0$, so $(k, T k) = (0, 0)$ is the zero vector in $G$. Hence the dimension of $G$ is $\aleph_0$, whereas it is clear that the dimension of $V$ is $c$. This completes the proof.
Theorem. Let $V$ be a separable inner product space and $\{ e_k \}_k$ an orthonormal basis of $V$. Then the map
$x \mapsto \{ \langle x, e_k \rangle \}_{k \in \mathbb{N}}$
is an isometric linear map $V \to \ell^2$ with a dense image.
This theorem can be regarded as an abstract form of Fourier series, in which an arbitrary orthonormal basis plays the role of the sequence of trigonometric polynomials. Note that the underlying index set can be taken to be any countable set (and in fact any set whatsoever, provided $\ell^2$ is defined appropriately, as is explained in the article Hilbert space). In particular, we obtain the following result in the theory of Fourier series:
Theorem. Let $V$ be the inner product space $C[-\pi, \pi]$. Then the sequence (indexed on the set of all integers) of continuous functions
$e_k(t) = \frac{e^{i k t}}{\sqrt{2 \pi}}$
is an orthonormal basis of the space $C[-\pi, \pi]$ with the $L^2$ inner product. The mapping
$f \mapsto \frac{1}{\sqrt{2 \pi}} \left\{ \int_{-\pi}^{\pi} f(t) e^{-i k t} \, \mathrm{d}t \right\}_{k \in \mathbb{Z}}$
is an isometric linear map with dense image.
Orthogonality of the sequence follows immediately from the fact that if $j \neq k$, then
$\int_{-\pi}^{\pi} e^{-i (j - k) t} \, \mathrm{d}t = 0.$
Normality of the sequence is by design, that is, the coefficients are chosen so that the norm comes out to 1. Finally, the fact that the sequence has a dense algebraic span in the inner product norm follows from the fact that the sequence has a dense algebraic span, this time in the space of continuous periodic functions on $[-\pi, \pi]$ with the uniform norm. This is the content of the Weierstrass theorem on the uniform density of trigonometric polynomials.
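A quick numerical check of this orthonormality (a Python sketch using a simple Riemann sum for the integral):

```python
import numpy as np

# Sketch: numerically checking orthonormality of e_k(t) = exp(i*k*t)/sqrt(2*pi)
# on [-pi, pi] under <f, g> = integral of f(t) * conj(g(t)) dt.
t = np.linspace(-np.pi, np.pi, 200001)
dt = t[1] - t[0]

def e(k):
    return np.exp(1j * k * t) / np.sqrt(2 * np.pi)

def inner(f, g):
    return np.sum(f * np.conj(g)) * dt      # simple Riemann approximation

print(abs(inner(e(3), e(3))))   # ~ 1  (normality)
print(abs(inner(e(3), e(5))))   # ~ 0  (orthogonality)
```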
Operators on inner product spaces
Several types of linear maps $A : V \to W$ between inner product spaces $V$ and $W$ are of relevance:
Continuous linear maps: $A : V \to W$ is linear and continuous with respect to the metric defined above, or equivalently, $A$ is linear and the set of non-negative reals $\{ \|A x\| : \|x\| \leq 1 \}$, where $x$ ranges over the closed unit ball of $V$, is bounded.
Symmetric linear operators: $A : V \to V$ is linear and $\langle A x, y \rangle = \langle x, A y \rangle$ for all $x, y \in V$.
Isometries: $A : V \to W$ is linear and $\|A x\| = \|x\|$ for all $x \in V$, or equivalently, $A$ is linear and $\langle A x, A y \rangle = \langle x, y \rangle$ for all $x, y \in V$. All isometries are injective. Isometries are morphisms between inner product spaces, and morphisms of real inner product spaces are orthogonal transformations (compare with orthogonal matrix).
Isometrical isomorphisms: is an isometry which is surjective (and hence bijective). Isometrical isomorphisms are also known as unitary operators (compare with unitary matrix).
From the point of view of inner product space theory, there is no need to distinguish between two spaces which are isometrically isomorphic. The spectral theorem provides a canonical form for symmetric, unitary and more generally normal operators on finite-dimensional inner product spaces. A generalization of the spectral theorem holds for continuous normal operators in Hilbert spaces.
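As a concrete instance (a Python sketch; the rotation angle is arbitrary), a plane rotation preserves the dot product on $\mathbb{R}^2$, so it is an isometry, and being surjective it is also a unitary operator (an orthogonal transformation in the real case):

```python
import numpy as np

# Sketch: a rotation matrix is an isometry of R^2 with the dot product,
# i.e. it preserves inner products (an orthogonal transformation).
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

rng = np.random.default_rng(4)
x, y = rng.standard_normal(2), rng.standard_normal(2)
print(np.isclose(np.dot(R @ x, R @ y), np.dot(x, y)))   # <Rx, Ry> = <x, y>
print(np.allclose(R.T @ R, np.eye(2)))                  # equivalently R^T R = I
```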
Generalizations
Any of the axioms of an inner product may be weakened, yielding generalized notions. The generalizations that are closest to inner products occur where bilinearity and conjugate symmetry are retained, but positive-definiteness is weakened.
Degenerate inner products
Main article: Krein space
If $V$ is a vector space and $\langle \cdot, \cdot \rangle$ a semi-definite sesquilinear form, then the function
$\|x\| = \sqrt{\langle x, x \rangle}$
makes sense and satisfies all the properties of a norm except that $\|x\| = 0$ does not imply $x = 0$ (such a functional is then called a semi-norm). We can produce an inner product space by considering the quotient $W = V / \{ x : \|x\| = 0 \}$. The sesquilinear form $\langle \cdot, \cdot \rangle$ factors through $W$.
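A toy illustration (a Python sketch; the form below is an arbitrary choice): on $\mathbb{R}^2$ the semi-definite form $\langle x, y \rangle = x_1 y_1$ has the $x_2$-axis as its null space, and the induced inner product lives on the quotient, which here amounts to keeping only the first coordinate:

```python
import numpy as np

# Sketch: a positive semi-definite form on R^2 whose null space is the
# x2-axis; the quotient by that null space carries a genuine inner product.
def semi_form(x, y):
    return x[0] * y[0]

z = np.array([0.0, 5.0])
print(semi_form(z, z))    # 0.0 even though z != 0: only a seminorm

u = np.array([2.0, 1.0])
v = np.array([2.0, -7.0])
print(semi_form(u, u), semi_form(v, v))   # equal: u and v lie in the same coset
```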
This construction is used in numerous contexts. The Gelfand–Naimark–Segal construction is a particularly important example of the use of this technique. Another example is the representation of semi-definite kernels on arbitrary sets.
Alternatively, one may require that the pairing be a nondegenerate form, meaning that for every non-zero $x$ there exists some $y$ such that $\langle x, y \rangle \neq 0$, though $y$ need not equal $x$; in other words, the induced map to the dual space $V \to V^{*}$ is injective. This generalization is important in differential geometry: a manifold whose tangent spaces have an inner product is a Riemannian manifold, while if this is relaxed to a nondegenerate conjugate symmetric form the manifold is a pseudo-Riemannian manifold. By Sylvester's law of inertia, just as every inner product is similar to the dot product with positive weights on a set of vectors, every nondegenerate conjugate symmetric form is similar to the dot product with nonzero weights on a set of vectors, and the numbers of positive and negative weights are called respectively the positive index and negative index. The product of vectors in Minkowski space is an example of an indefinite inner product, although, technically speaking, it is not an inner product according to the standard definition above. Minkowski space has four dimensions and indices 3 and 1 (the assignment of "+" and "−" to them differs depending on conventions).
Purely algebraic statements (ones that do not use positivity) usually only rely on the nondegeneracy (the injective homomorphism $V \to V^{*}$) and thus hold more generally.
Related products
The term "inner product" is opposed to outer product, which is a slightly more general opposite. Simply, in coordinates, the inner product is the product of a covector with an vector, yielding a matrix (a scalar), while the outer product is the product of an vector with a covector, yielding an matrix. Note that the outer product is defined for different dimensions, while the inner product requires the same dimension. If the dimensions are the same, then the inner product is the trace of the outer product (trace only being properly defined for square matrices). In an informal summary: "inner is horizontal times vertical and shrinks down, outer is vertical times horizontal and expands out".
More abstractly, the outer product is the bilinear map sending a vector and a covector to a rank 1 linear transformation (simple tensor of type (1, 1)), while the inner product is the bilinear evaluation map given by evaluating a covector on a vector; the order of the domain vector spaces here reflects the covector/vector distinction.
As a further complication, in geometric algebra the inner product and the exterior (Grassmann) product are combined in the geometric product (the Clifford product in a Clifford algebra) – the inner product sends two vectors (1-vectors) to a scalar (a 0-vector), while the exterior product sends two vectors to a bivector (2-vector) – and in this context the exterior product is usually called the outer product (alternatively, wedge product). The inner product is more correctly called a scalar product in this context, as the nondegenerate quadratic form in question need not be positive definite (need not be an inner product).
^ By combining the linear-in-the-first-argument property with the conjugate symmetry property you get conjugate-linearity in the second argument: $\langle x, s y \rangle = \overline{s} \langle x, y \rangle$. This is how the inner product was originally defined and is still used in some old-school math communities. However, all of engineering and computer science, and most of physics and modern mathematics now define the inner product to be linear in the second argument and conjugate-linear in the first argument because this is more compatible with several other conventions in mathematics. Notably, for any inner product, there is some Hermitian, positive-definite matrix $\mathbf{M}$ such that $\langle x, y \rangle = x^{\dagger} \mathbf{M} y$. (Here, $x^{\dagger}$ is the conjugate transpose of $x$.)
^ This means that $\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$ and $\langle s x, y \rangle = s \langle x, y \rangle$ for all vectors $x, y, z$ and all scalars $s$.
^ A bar over an expression denotes complex conjugation; for instance, $\overline{s}$ is the complex conjugate of $s$. For real values, $\overline{s} = s$, and conjugate symmetry is equivalent to symmetry.
^ This is because condition (1) (i.e. linearity in the first argument) and positive definiteness imply that $\langle x, x \rangle$ is always a real number. And as mentioned before, a sesquilinear form is Hermitian if and only if $\langle x, x \rangle$ is real for all $x$.
Axler, Sheldon (1997). Linear Algebra Done Right (2nd ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-387-98258-8.
Emch, Gerard G. (1972). Algebraic Methods in Statistical Mechanics and Quantum Field Theory. Wiley-Interscience. ISBN 978-0-471-23900-0.
Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
Schaefer, Helmut H. (1999). Topological Vector Spaces. GTM. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
Schechter, Eric (1996). Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365.
Swartz, Charles (1992). An Introduction to Functional Analysis. New York: M. Dekker. ISBN 978-0-8247-8643-4. OCLC 24909067.
Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
Young, Nicholas (1988). An Introduction to Hilbert Space. Cambridge University Press. ISBN 978-0-521-33717-5.