Inner product space
In mathematics, an inner product space is a real or complex vector space endowed with an operation called an inner product. The inner product of two vectors in the space is a scalar, often denoted with angle brackets such as in $\langle a, b \rangle$. Inner products allow formal definitions of intuitive geometric notions, such as lengths, angles, and orthogonality of vectors. Inner product spaces generalize Euclidean vector spaces, in which the inner product is the dot product or scalar product of Cartesian coordinates. Inner product spaces of infinite dimension are widely used in functional analysis. Inner product spaces over the field of complex numbers are sometimes referred to as unitary spaces. The first usage of the concept of a vector space with an inner product is due to Giuseppe Peano, in 1898.
An inner product naturally induces an associated norm $\|x\| = \sqrt{\langle x, x \rangle}$; so, every inner product space is a normed vector space. If this normed space is also complete, then the inner product space is a Hilbert space. If an inner product space $H$ is not a Hilbert space, it can be extended by completion to a Hilbert space $\overline{H}$. This means that $H$ is a linear subspace of $\overline{H}$, the inner product of $H$ is the restriction of that of $\overline{H}$, and $H$ is dense in $\overline{H}$ for the topology defined by the norm.
Definition
In this article, $F$ denotes a field that is either the real numbers $\mathbb{R}$ or the complex numbers $\mathbb{C}$. A scalar is thus an element of $F$. A bar over an expression representing a scalar denotes the complex conjugate of this scalar. A zero vector is denoted $\mathbf{0}$ for distinguishing it from the scalar $0$.
An inner product space is a vector space $V$ over the field $F$ together with an inner product, that is, a map
$$\langle \cdot, \cdot \rangle : V \times V \to F$$
that satisfies the following three properties for all vectors $x, y, z \in V$ and all scalars $a, b \in F$:
- Conjugate symmetry: $\langle x, y \rangle = \overline{\langle y, x \rangle}$. As $a = \overline{a}$ if and only if $a$ is real, conjugate symmetry implies that $\langle x, x \rangle$ is always a real number. If $F$ is $\mathbb{R}$, conjugate symmetry is just symmetry.
- Linearity in the first argument: $\langle ax + by, z \rangle = a \langle x, z \rangle + b \langle y, z \rangle$.
- Positive-definiteness: if $x$ is not zero, then $\langle x, x \rangle > 0$.
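As an illustrative aside (not part of the formal definition), the following NumPy sketch checks the three axioms numerically for the standard inner product on $\mathbb{C}^3$; the vectors, scalars, and function name are arbitrary sample choices.

```python
import numpy as np

def inner(x, y):
    # Standard inner product on C^n: linear in the first argument,
    # conjugate-linear in the second.
    return np.sum(x * np.conj(y))

rng = np.random.default_rng(0)
x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)
z = rng.normal(size=3) + 1j * rng.normal(size=3)
a, b = 2 - 1j, 0.5 + 3j

# Conjugate symmetry: <x, y> equals the conjugate of <y, x>
assert np.isclose(inner(x, y), np.conj(inner(y, x)))
# Linearity in the first argument
assert np.isclose(inner(a * x + b * y, z), a * inner(x, z) + b * inner(y, z))
# Positive-definiteness: <x, x> is real and positive for nonzero x
assert abs(inner(x, x).imag) < 1e-12 and inner(x, x).real > 0
```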
Basic properties
In the following properties, which result almost immediately from the definition of an inner product, $x$, $y$ and $z$ are arbitrary vectors, and $a$ and $b$ are arbitrary scalars.
- $\langle x, x \rangle$ is real and nonnegative.
- $\langle x, x \rangle = 0$ if and only if $x = \mathbf{0}$.
- $\langle x, ay + bz \rangle = \overline{a} \langle x, y \rangle + \overline{b} \langle x, z \rangle$, that is, conjugate-linearity in the second argument.
- $\langle x + y, x + y \rangle = \langle x, x \rangle + 2\operatorname{Re}\langle x, y \rangle + \langle y, y \rangle$, where $\operatorname{Re}$ denotes the real part.
Over $\mathbb{R}$, conjugate symmetry reduces to symmetry, and sesquilinearity reduces to bilinearity. Hence an inner product on a real vector space is a positive-definite symmetric bilinear form. The binomial expansion of a square becomes
$$\langle x + y, x + y \rangle = \langle x, x \rangle + 2\langle x, y \rangle + \langle y, y \rangle.$$
Notation
Several notations are used for inner products, including $\langle \cdot, \cdot \rangle$, $(\cdot, \cdot)$, $\langle \cdot \mid \cdot \rangle$ and $(\cdot \mid \cdot)$, as well as the usual dot product.
Convention variant
Some authors, especially in physics and matrix algebra, prefer to define inner products and sesquilinear forms with linearity in the second argument rather than the first. Then the first argument becomes conjugate linear, rather than the second. Bra–ket notation in quantum mechanics also uses slightly different notation, i.e. $\langle \cdot \mid \cdot \rangle$, where $\langle x \mid y \rangle := (y, x)$.
Examples
Real and complex numbers
Among the simplest examples of inner product spaces are $\mathbb{R}$ and $\mathbb{C}$. The real numbers $\mathbb{R}$ are a vector space over $\mathbb{R}$ that becomes an inner product space with arithmetic multiplication as its inner product:
$$\langle x, y \rangle := xy \quad \text{for } x, y \in \mathbb{R}.$$
The complex numbers $\mathbb{C}$ are a vector space over $\mathbb{C}$ that becomes an inner product space with the inner product
$$\langle x, y \rangle := x \overline{y} \quad \text{for } x, y \in \mathbb{C}.$$
Unlike with the real numbers, the assignment $(x, y) \mapsto xy$ does not define a complex inner product on $\mathbb{C}$.
Euclidean vector space
More generally, the real $n$-space $\mathbb{R}^n$ with the dot product is an inner product space, an example of a Euclidean vector space:
$$\left\langle [x_1, \ldots, x_n]^{\operatorname{T}}, [y_1, \ldots, y_n]^{\operatorname{T}} \right\rangle = x^{\operatorname{T}} y = \sum_{i=1}^{n} x_i y_i = x_1 y_1 + \cdots + x_n y_n,$$
where $x^{\operatorname{T}}$ is the transpose of $x$.
A function $\langle \cdot, \cdot \rangle : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ is an inner product on $\mathbb{R}^n$ if and only if there exists a symmetric positive-definite matrix $M$ such that $\langle x, y \rangle = x^{\operatorname{T}} M y$ for all $x, y \in \mathbb{R}^n$. If $M$ is the identity matrix then $\langle x, y \rangle = x^{\operatorname{T}} M y$ is the dot product. For another example, if $n = 2$ and $M = \begin{bmatrix} a & b \\ b & d \end{bmatrix}$ is positive-definite (which happens if and only if $a > 0$ and $\det M = ad - b^2 > 0$), then for any $x = [x_1, x_2]^{\operatorname{T}}, y = [y_1, y_2]^{\operatorname{T}} \in \mathbb{R}^2$,
$$\langle x, y \rangle := x^{\operatorname{T}} M y = a x_1 y_1 + b x_1 y_2 + b x_2 y_1 + d x_2 y_2.$$
As mentioned earlier, every inner product on $\mathbb{R}^n$ is of this form.
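This characterization can be illustrated numerically; the following NumPy sketch (with an arbitrarily chosen symmetric positive-definite $M$) computes $x^{\operatorname{T}} M y$ and recovers the dot product when $M$ is the identity.

```python
import numpy as np

def inner_M(x, y, M):
    # Inner product on R^n induced by a symmetric positive-definite matrix M.
    return x @ M @ y

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])      # symmetric, positive-definite (det = 5 > 0, diagonal > 0)
x = np.array([1.0, -2.0])
y = np.array([0.5, 4.0])

print(inner_M(x, y, M))                    # weighted inner product
print(inner_M(x, y, np.eye(2)), x @ y)     # with M = I this is the dot product
print(inner_M(x, x, M) > 0)                # positive-definiteness on this sample x
```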
Complex coordinate space
The general form of an inner product on $\mathbb{C}^n$ is known as the Hermitian form and is given by
$$\langle x, y \rangle = y^{\dagger} M x = \overline{x^{\dagger} M y},$$
where $M$ is any Hermitian positive-definite matrix and $y^{\dagger}$ is the conjugate transpose of $y$. For the real case, this corresponds to the dot product of the results of directionally-different scaling of the two vectors, with positive scale factors and orthogonal directions of scaling. It is a weighted-sum version of the dot product with positive weights, up to an orthogonal transformation.
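A similar sketch for the complex case, under this article's convention (linear in the first argument, conjugate-linear in the second); the Hermitian positive-definite matrix below is an arbitrary example.

```python
import numpy as np

def hermitian_form(x, y, M):
    # <x, y> = y^dagger M x: linear in x, conjugate-linear in y.
    return np.conj(y) @ M @ x

M = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])   # Hermitian; positive-definite (leading minors 2 and 4)
x = np.array([1.0 + 2j, -1.0])
y = np.array([0.5j, 1.0 + 1j])

# Conjugate symmetry and positivity on these sample vectors
assert np.isclose(hermitian_form(x, y, M), np.conj(hermitian_form(y, x, M)))
assert hermitian_form(x, x, M).real > 0 and abs(hermitian_form(x, x, M).imag) < 1e-12
```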
Hilbert space
The article on Hilbert spaces has several examples of inner product spaces, wherein the metric induced by the inner product yields a complete metric space. An example of an inner product space which induces an incomplete metric is the space $C([a, b])$ of continuous complex valued functions $f$ and $g$ on the interval $[a, b]$. The inner product is
$$\langle f, g \rangle = \int_a^b f(t) \overline{g(t)} \, dt.$$
This space is not complete; consider for example, for the interval $[-1, 1]$, the sequence of continuous "step" functions $\{f_k\}_k$, defined by:
$$f_k(t) = \begin{cases} 0 & t \in [-1, 0] \\ kt & t \in \left(0, \tfrac{1}{k}\right] \\ 1 & t \in \left(\tfrac{1}{k}, 1\right]. \end{cases}$$
This sequence is a Cauchy sequence for the norm induced by the preceding inner product, which does not converge to a continuous function.
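The incompleteness claim can be illustrated numerically. The sketch below (grid size chosen arbitrarily) approximates the induced $L^2$ distance between $f_k$ and $f_m$ and shows it shrinking as $k$ and $m$ grow, even though the pointwise limit is a discontinuous step function.

```python
import numpy as np

t = np.linspace(-1.0, 1.0, 200001)
dt = t[1] - t[0]

def f(k):
    # Continuous "step" function: 0 on [-1, 0], k*t on (0, 1/k], 1 on (1/k, 1].
    return np.clip(k * t, 0.0, 1.0)

def l2_dist(g, h):
    # Norm induced by <f, g> = integral of f(t) * conj(g(t)) dt (Riemann sum).
    return np.sqrt(np.sum(np.abs(g - h) ** 2) * dt)

for k, m in [(10, 20), (100, 200), (1000, 2000)]:
    print(k, m, l2_dist(f(k), f(m)))   # shrinks toward 0: the sequence is Cauchy
```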
Random variables
For real random variables $X$ and $Y$, the expected value of their product
$$\langle X, Y \rangle = \mathbb{E}[XY]$$
is an inner product. In this case, $\langle X, X \rangle = 0$ if and only if $\mathbb{P}(X = 0) = 1$ (that is, $X = 0$ almost surely), where $\mathbb{P}$ denotes the probability of the event. This definition of expectation as inner product can be extended to random vectors as well.
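As an illustration (the distributions and sample size below are arbitrary choices), the sample mean of the product approximates this inner product:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
X = rng.normal(0.0, 1.0, n)          # arbitrary real random variables
Y = 2.0 * X + rng.normal(0.0, 0.5, n)

inner_XY = np.mean(X * Y)            # Monte Carlo estimate of <X, Y> = E[X Y]
norm_X = np.sqrt(np.mean(X * X))     # induced norm sqrt(E[X^2])
print(inner_XY, norm_X)              # close to 2 and 1 for these particular choices
```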
Complex matrices
The inner product for complex square matrices of the same size is the Frobenius inner product $\langle A, B \rangle := \operatorname{tr}\left(A B^{\dagger}\right)$. Since trace and transposition are linear and the conjugation is on the second matrix, it is a sesquilinear operator. We further get Hermitian symmetry by
$$\langle A, B \rangle = \operatorname{tr}\left(A B^{\dagger}\right) = \overline{\operatorname{tr}\left(B A^{\dagger}\right)} = \overline{\langle B, A \rangle}.$$
Finally, since for nonzero $A$,
$$\langle A, A \rangle = \sum_{i,j} \left|A_{ij}\right|^2 > 0,$$
we get that the Frobenius inner product is positive definite too, and so is an inner product.
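A minimal NumPy check of these claims on arbitrary random matrices:

```python
import numpy as np

def frobenius(A, B):
    # <A, B> = tr(A B^dagger), conjugate-linear in the second argument.
    return np.trace(A @ B.conj().T)

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

assert np.isclose(frobenius(A, B), np.conj(frobenius(B, A)))   # Hermitian symmetry
assert np.isclose(frobenius(A, A), np.sum(np.abs(A) ** 2))     # equals sum of |A_ij|^2 > 0
```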
Vector spaces with forms
On an inner product space, or more generally a vector space with a nondegenerate form, vectors can be sent to covectors, so that one can take the inner product and outer product of two vectors, not simply of a vector and a covector.
Basic results, terminology, and definitions
Norm properties
Every inner product space induces a norm, called its canonical norm, that is defined by
$$\|x\| = \sqrt{\langle x, x \rangle}.$$
With this norm, every inner product space becomes a normed vector space.
So, every general property of normed vector spaces applies to inner product spaces.
In particular, one has the following properties:
- Absolute homogeneity: $\|ax\| = |a| \, \|x\|$ for every scalar $a$.
- Triangle inequality: $\|x + y\| \leq \|x\| + \|y\|$.
- Cauchy–Schwarz inequality: $|\langle x, y \rangle| \leq \|x\| \, \|y\|$, with equality if and only if $x$ and $y$ are linearly dependent.
- Parallelogram law: $\|x + y\|^2 + \|x - y\|^2 = 2\|x\|^2 + 2\|y\|^2$.
- Polarization identity: the inner product can be recovered from the norm, as described below.
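Two of these properties, the Cauchy–Schwarz inequality and the parallelogram law, can be spot-checked numerically for the standard inner product on $\mathbb{C}^4$; an illustrative sketch with arbitrary vectors follows.

```python
import numpy as np

def inner(x, y):
    return np.sum(x * np.conj(y))

def norm(x):
    # Canonical norm induced by the inner product.
    return np.sqrt(inner(x, x).real)

rng = np.random.default_rng(3)
x = rng.normal(size=4) + 1j * rng.normal(size=4)
y = rng.normal(size=4) + 1j * rng.normal(size=4)

# Cauchy-Schwarz inequality: |<x, y>| <= ||x|| ||y||
assert abs(inner(x, y)) <= norm(x) * norm(y) + 1e-12
# Parallelogram law: ||x+y||^2 + ||x-y||^2 = 2||x||^2 + 2||y||^2
assert np.isclose(norm(x + y)**2 + norm(x - y)**2, 2*norm(x)**2 + 2*norm(y)**2)
```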
Orthogonality
Two vectors $x$ and $y$ are said to be orthogonal, written $x \perp y$, if $\langle x, y \rangle = 0$.
Real and complex parts of inner products
Suppose that $\langle \cdot, \cdot \rangle$ is an inner product on $V$. The polarization identity shows that the real part of the inner product is
$$\operatorname{Re} \langle x, y \rangle = \frac{1}{4} \left( \|x + y\|^2 - \|x - y\|^2 \right).$$
If $V$ is a real vector space then
$$\langle x, y \rangle = \operatorname{Re} \langle x, y \rangle = \frac{1}{4} \left( \|x + y\|^2 - \|x - y\|^2 \right),$$
and the imaginary part of $\langle x, y \rangle$ is always $0$.
Assume for the rest of this section that $V$ is a complex vector space.
The polarization identity for complex vector spaces shows that
$$\langle x, y \rangle = \frac{1}{4} \left( \|x + y\|^2 - \|x - y\|^2 + i \|x + iy\|^2 - i \|x - iy\|^2 \right) = \operatorname{Re} \langle x, y \rangle + i \operatorname{Re} \langle x, iy \rangle.$$
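The identity can be verified numerically for the standard inner product on $\mathbb{C}^3$; a sketch with arbitrary vectors, keeping the convention of linearity in the first argument:

```python
import numpy as np

def inner(x, y):
    return np.sum(x * np.conj(y))

def norm_sq(x):
    return inner(x, x).real

rng = np.random.default_rng(4)
x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)

# Complex polarization identity (inner product linear in the first argument)
polar = 0.25 * (norm_sq(x + y) - norm_sq(x - y)
                + 1j * norm_sq(x + 1j * y) - 1j * norm_sq(x - 1j * y))
assert np.isclose(polar, inner(x, y))
```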
The map defined by $\langle x \mid y \rangle = \langle y, x \rangle$ for all $x, y \in V$ satisfies the axioms of the inner product except that it is antilinear in its first, rather than its second, argument. The real parts of both $\langle x \mid y \rangle$ and $\langle x, y \rangle$ are equal to $\operatorname{Re} \langle x, y \rangle$, but the inner products differ in their complex part:
$$\langle x \mid y \rangle = \operatorname{Re} \langle x, y \rangle - i \operatorname{Re} \langle x, iy \rangle, \qquad \langle x, y \rangle = \operatorname{Re} \langle x, y \rangle + i \operatorname{Re} \langle x, iy \rangle.$$
The last equality is similar to the formula expressing a linear functional in terms of its real part.
These formulas show that every complex inner product is completely determined by its real part. Moreover, this real part defines an inner product on $V$, considered as a real vector space. There is thus a one-to-one correspondence between complex inner products on a complex vector space $V$ and real inner products on $V$.
For example, suppose that $V = \mathbb{C}^n$ for some integer $n > 0$. When $V$ is considered as a real vector space in the usual way (meaning that it is identified with the $2n$-dimensional real vector space $\mathbb{R}^{2n}$, with each $\left(a_1 + i b_1, \ldots, a_n + i b_n\right) \in \mathbb{C}^n$ identified with $\left(a_1, b_1, \ldots, a_n, b_n\right) \in \mathbb{R}^{2n}$), then the dot product defines a real inner product on this space. The unique complex inner product $\langle \cdot, \cdot \rangle$ on $V = \mathbb{C}^n$ induced by the dot product is the map that sends $c = (c_1, \ldots, c_n), d = (d_1, \ldots, d_n) \in \mathbb{C}^n$ to $\langle c, d \rangle := c_1 \overline{d_1} + \cdots + c_n \overline{d_n}$.
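This correspondence can be checked numerically; the interleaving of real and imaginary parts below is one explicit choice for the identification of $\mathbb{C}^n$ with $\mathbb{R}^{2n}$ (an illustrative sketch).

```python
import numpy as np

def complex_inner(c, d):
    # <c, d> = sum of c_k * conj(d_k)
    return np.sum(c * np.conj(d))

def realify(c):
    # Identify (a1 + i b1, ..., an + i bn) with (a1, b1, ..., an, bn) in R^{2n}.
    return np.column_stack([c.real, c.imag]).ravel()

rng = np.random.default_rng(5)
c = rng.normal(size=3) + 1j * rng.normal(size=3)
d = rng.normal(size=3) + 1j * rng.normal(size=3)

# The real part of the complex inner product is the dot product on R^{2n}.
assert np.isclose(complex_inner(c, d).real, realify(c) @ realify(d))
```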
Real vs. complex inner products
Let $V_{\mathbb{R}}$ denote $V$ considered as a vector space over the real numbers rather than complex numbers. The real part of the complex inner product $\langle x, y \rangle$ is the map $\langle x, y \rangle_{\mathbb{R}} = \operatorname{Re} \langle x, y \rangle : V_{\mathbb{R}} \times V_{\mathbb{R}} \to \mathbb{R}$, which necessarily forms a real inner product on the real vector space $V_{\mathbb{R}}$. Every inner product on a real vector space is a bilinear and symmetric map.
For example, if $V = \mathbb{C}$ with inner product $\langle x, y \rangle = x \overline{y}$, where $V$ is a vector space over the field $\mathbb{C}$, then $V_{\mathbb{R}} = \mathbb{R}^2$ is a vector space over $\mathbb{R}$ and $\langle x, y \rangle_{\mathbb{R}}$ is the dot product $x \cdot y$, where $x = a + ib \in V = \mathbb{C}$ is identified with the point $(a, b) \in V_{\mathbb{R}} = \mathbb{R}^2$ (and similarly for $y$); thus the standard inner product $\langle x, y \rangle = x \overline{y}$ on $\mathbb{C}$ is an "extension" of the dot product. Also, had $\langle x, y \rangle$ been instead defined to be the symmetric map $\langle x, y \rangle = xy$ (rather than the usual conjugate symmetric map $\langle x, y \rangle = x \overline{y}$), then its real part would not be the dot product; furthermore, without the complex conjugate, if $x \in \mathbb{C}$ but $x \notin \mathbb{R}$, then $\langle x, x \rangle = x^2 \notin [0, \infty)$, so the assignment $\|x\| = \sqrt{\langle x, x \rangle}$ would not define a norm.
The next examples show that although real and complex inner products have many properties and results in common, they are not entirely interchangeable.
For instance, if $\langle x, y \rangle = 0$ then $\langle x, y \rangle_{\mathbb{R}} = 0$, but the next example shows that the converse is in general not true.
Given any $x \in V$, the vector $ix$ belongs to $V$ and so also belongs to $V_{\mathbb{R}}$. For the complex inner product, $\langle x, ix \rangle = -i \|x\|^2$, whereas for the real inner product the value is always $\langle x, ix \rangle_{\mathbb{R}} = 0$.
If $\langle \cdot, \cdot \rangle$ is a complex inner product and $A : V \to V$ is a continuous linear operator that satisfies $\langle x, Ax \rangle = 0$ for all $x \in V$, then $A = 0$. This statement is no longer true if $\langle \cdot, \cdot \rangle$ is instead a real inner product, as this next example shows.
Suppose that $V = \mathbb{C}$ has the inner product $\langle x, y \rangle := x \overline{y}$ mentioned above. Then the map $A : V \to V$ defined by $Ax = ix$ is a linear map that denotes rotation by $90^{\circ}$ in the plane. Because $x$ and $Ax$ are perpendicular vectors and $\langle x, Ax \rangle_{\mathbb{R}}$ is just the dot product, $\langle x, Ax \rangle_{\mathbb{R}} = 0$ for all vectors $x$; nevertheless, this rotation map $A$ is certainly not identically $0$. In contrast, using the complex inner product gives $\langle x, Ax \rangle = -i \|x\|^2$, which is not identically zero.
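A numerical restatement of this contrast (an illustrative sketch; the sample value is arbitrary):

```python
import numpy as np

def complex_inner(x, y):
    return x * np.conj(y)               # inner product on C

def real_inner(x, y):
    return complex_inner(x, y).real     # dot product on R^2 under the identification

x = 3.0 - 2.0j
Ax = 1j * x                             # rotation by 90 degrees

print(real_inner(x, Ax))                # 0.0: x and Ax are perpendicular
print(complex_inner(x, Ax))             # -i * |x|^2, not zero
```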
Orthonormal sequences
Let $V$ be a finite dimensional inner product space of dimension $n$. Recall that every basis of $V$ consists of exactly $n$ linearly independent vectors. Using the Gram–Schmidt process we may start with an arbitrary basis and transform it into an orthonormal basis. That is, into a basis in which all the elements are orthogonal and have unit norm. In symbols, a basis $\{e_1, \ldots, e_n\}$ is orthonormal if $\langle e_i, e_j \rangle = 0$ for every $i \neq j$ and $\langle e_i, e_i \rangle = \|e_i\|^2 = 1$ for each index $i$.
This definition of orthonormal basis generalizes to the case of infinite-dimensional inner product spaces in the following way. Let $V$ be any inner product space. Then a collection
$$E = \left\{ e_a \right\}_{a \in A}$$
is a basis for $V$ if the subspace of $V$ generated by finite linear combinations of elements of $E$ is dense in $V$ (in the norm induced by the inner product). Say that $E$ is an orthonormal basis for $V$ if it is a basis and
$$\langle e_a, e_b \rangle = 0 \quad \text{if } a \neq b, \qquad \langle e_a, e_a \rangle = \|e_a\|^2 = 1,$$
for all $a, b \in A$.
Using an infinite-dimensional analog of the Gram–Schmidt process one may show:
Theorem. Any separable inner product space has an orthonormal basis.
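In the finite-dimensional case the Gram–Schmidt process mentioned above is straightforward to carry out numerically. The following is a minimal sketch for the standard inner product, assuming the input vectors are linearly independent.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors with respect to
    the standard inner product <x, y> = sum_i x_i * conj(y_i)."""
    basis = []
    for v in vectors:
        w = v.astype(complex)
        for e in basis:
            w = w - np.vdot(e, w) * e   # subtract the component of w along e
        basis.append(w / np.linalg.norm(w))
    return basis

vecs = [np.array([1.0, 1.0, 0.0]),
        np.array([1.0, 0.0, 1.0]),
        np.array([0.0, 1.0, 1.0])]
E = gram_schmidt(vecs)
gram = np.array([[np.vdot(a, b) for b in E] for a in E])
print(np.allclose(gram, np.eye(3)))     # True: the result is an orthonormal basis
```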
Using the Hausdorff maximal principle and the fact that in a complete inner product space orthogonal projection onto linear subspaces is well-defined, one may also show that
Theorem. Any complete inner product space has an orthonormal basis.
The two previous theorems raise the question of whether all inner product spaces have an orthonormal basis. The answer, it turns out, is negative. This is a non-trivial result, and is proved below. The following proof is taken from Halmos's A Hilbert Space Problem Book.
Let $K$ be a Hilbert space of dimension $\aleph_0$ (for instance, $K = \ell^2(\mathbb{N})$). Let $E$ be an orthonormal basis of $K$, so $|E| = \aleph_0$. Extend $E$ to a Hamel basis $E \cup F$ for $K$, where $E \cap F = \varnothing$. Since it is known that the Hamel dimension of $K$ is $c$, the cardinality of the continuum, it must be that $|F| = c$.
Let $L$ be a Hilbert space of dimension $c$ (for instance, $L = \ell^2(\mathbb{R})$). Let $B$ be an orthonormal basis for $L$, and let $\varphi : F \to B$ be a bijection. Then there is a linear transformation $T : K \to L$ such that $Tf = \varphi(f)$ for $f \in F$, and $Te = 0$ for $e \in E$.
Let $V = K \oplus L$ and let $G = \{(x, Tx) : x \in K\}$ be the graph of $T$. Let $\overline{G}$ be the closure of $G$ in $V$; we will show $\overline{G} = V$. Since for any $e \in E$ we have $(e, 0) \in G$, it follows that $K \oplus 0 \subseteq \overline{G}$.
Next, if $b \in B$, then $b = Tf = \varphi(f)$ for some $f \in F \subseteq K$, so $(f, b) \in G \subseteq \overline{G}$; since $(f, 0) \in \overline{G}$ as well, we also have $(0, b) \in \overline{G}$. It follows that $0 \oplus L \subseteq \overline{G}$, so $\overline{G} = V$, and $G$ is dense in $V$.
Finally, $\{(e, 0) : e \in E\}$ is a maximal orthonormal set in $G$; if
$$\langle (e, 0), (x, Tx) \rangle = 0$$
for all $e \in E$, then $\langle e, x \rangle = 0$ for all $e \in E$, so $x = 0$, and $(x, Tx) = (0, 0)$ is the zero vector in $G$. Hence the dimension of $G$ is $\aleph_0$, whereas it is clear that the dimension of $V$ is $c$. This completes the proof.
Parseval's identity leads immediately to the following theorem:
Theorem. Let $V$ be a separable inner product space and $\left\{ e_k \right\}_k$ an orthonormal basis of $V$. Then the map
$$x \mapsto \left\{ \langle x, e_k \rangle \right\}_{k \in \mathbb{N}}$$
is an isometric linear map $V \to \ell^2$ with a dense image.
This theorem can be regarded as an abstract form of Fourier series, in which an arbitrary orthonormal basis plays the role of the sequence of trigonometric polynomials. Note that the underlying index set can be taken to be any countable set. In particular, we obtain the following result in the theory of Fourier series:
Theorem. Let $V$ be the inner product space $C[-\pi, \pi]$. Then the sequence (indexed on the set of all integers) of continuous functions
$$e_k(t) = \frac{e^{ikt}}{\sqrt{2\pi}}$$
is an orthonormal basis of the space $C[-\pi, \pi]$ with the $L^2$ inner product. The mapping
$$x \mapsto \frac{1}{\sqrt{2\pi}} \left\{ \int_{-\pi}^{\pi} x(t) e^{-ikt} \, dt \right\}_{k \in \mathbb{Z}}$$
is an isometric linear map with dense image.
Orthogonality of the sequence follows immediately from the fact that if $k \neq j$, then
$$\int_{-\pi}^{\pi} e^{-i(j - k)t} \, dt = 0.$$
Normality of the sequence is by design, that is, the coefficients are chosen so that the norm comes out to 1. Finally, the fact that the sequence has a dense algebraic span in the inner product norm follows from the fact that the sequence has a dense algebraic span, this time in the space of continuous periodic functions on $[-\pi, \pi]$ with the uniform norm. This is the content of the Weierstrass theorem on the uniform density of trigonometric polynomials.
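As an illustrative numerical aside (the finite grid and the truncation are approximations, with sizes chosen arbitrarily), the orthonormality of the functions $e_k$ and the reconstruction of a sample function from its Fourier coefficients can be checked as follows:

```python
import numpy as np

t = np.linspace(-np.pi, np.pi, 20001)
dt = t[1] - t[0]

def inner(f, g):
    # <f, g> = integral over [-pi, pi] of f(t) * conj(g(t)) dt (Riemann sum).
    return np.sum(f * np.conj(g)) * dt

def e(k):
    return np.exp(1j * k * t) / np.sqrt(2 * np.pi)

print(abs(inner(e(2), e(3))))        # approximately 0 (orthogonality)
print(inner(e(2), e(2)).real)        # approximately 1 (normality)

# Partial Fourier reconstruction of a sample continuous function
x = t ** 2
K = 50
approx = sum(inner(x, e(k)) * e(k) for k in range(-K, K + 1))
print(np.sqrt(inner(x - approx, x - approx).real))   # small residual norm
```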