Positive-definite kernel
In operator theory, a branch of mathematics, a positive-definite kernel is a generalization of a positive-definite function or a positive-definite matrix. It was first introduced by James Mercer in the early 20th century, in the context of solving integral operator equations. Since then, positive-definite functions and their various analogues and generalizations have arisen in diverse parts of mathematics. They occur naturally in Fourier analysis, probability theory, operator theory, complex function theory, moment problems, integral equations, boundary-value problems for partial differential equations, machine learning, embedding problems, information theory, and other areas.
Definition
Let $\mathcal{X}$ be a nonempty set, sometimes referred to as the index set. A symmetric function $K: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is called a positive-definite (p.d.) kernel on $\mathcal{X}$ if

$$\sum_{i=1}^n \sum_{j=1}^n c_i c_j K(x_i, x_j) \ge 0 \tag{1.1}$$

holds for all $x_1, \dots, x_n \in \mathcal{X}$ and $c_1, \dots, c_n \in \mathbb{R}$.
In probability theory, a distinction is sometimes made between positive-definite kernels, for which equality in (1.1) implies $c_i = 0$ for all $i$, and positive semi-definite kernels, which do not impose this condition. Note that this is equivalent to requiring that every finite matrix constructed by pairwise evaluation, $\mathbf{K}_{ij} = K(x_i, x_j)$, has either entirely positive (p.d.) or nonnegative (p.s.d.) eigenvalues.
In the mathematical literature, kernels are usually complex-valued functions. That is, a complex-valued function $K: \mathcal{X} \times \mathcal{X} \to \mathbb{C}$ is called a Hermitian kernel if $K(x, y) = \overline{K(y, x)}$ and positive definite if for every finite set of points $x_1, \dots, x_n \in \mathcal{X}$ and any complex numbers $\xi_1, \dots, \xi_n$,

$$\sum_{i=1}^n \sum_{j=1}^n \xi_i \overline{\xi_j} K(x_i, x_j) \ge 0,$$

where $\overline{\xi_j}$ denotes the complex conjugate. In the rest of this article we assume real-valued functions, which is the common practice in applications of p.d. kernels.
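To make the definition concrete, the following is a minimal numerical sketch (assuming NumPy; the kernel and function names are illustrative, not from the text above) that builds the matrix of pairwise evaluations $\mathbf{K}_{ij} = K(x_i, x_j)$ for a finite set of points and checks that its eigenvalues are nonnegative, which is the matrix form of condition (1.1).

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # A standard p.d. kernel on R^d: K(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))

def is_positive_semidefinite(kernel, points, tol=1e-10):
    """Build the Gram matrix K_ij = kernel(x_i, x_j) and check that all
    eigenvalues are (numerically) nonnegative."""
    n = len(points)
    gram = np.array([[kernel(points[i], points[j]) for j in range(n)]
                     for i in range(n)])
    eigvals = np.linalg.eigvalsh(gram)  # Gram matrix is symmetric
    return bool(np.all(eigvals >= -tol))

rng = np.random.default_rng(0)
xs = rng.normal(size=(5, 3))  # five random points in R^3
print(is_positive_semidefinite(gaussian_kernel, xs))  # expected: True
```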
Some general properties
- For a family of p.d. kernels $(K_i)_{i \in \mathbb{N}}$, $K_i: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ (a numerical check of the first two properties is sketched after this list):
- * The conical sum $\sum_{i=1}^n \lambda_i K_i$ is p.d., given $\lambda_1, \dots, \lambda_n \ge 0$.
- * The product $K_1^{a_1} \cdots K_n^{a_n}$ is p.d., given $a_1, \dots, a_n \in \mathbb{N}$.
- * The limit $K = \lim_{n \to \infty} K_n$ is p.d. if the limit exists.
- If $(\mathcal{X}_i)_{i=1}^n$ is a sequence of sets, and $(K_i)_{i=1}^n$ a sequence of p.d. kernels, then both $K((x_1, \dots, x_n), (y_1, \dots, y_n)) = \prod_{i=1}^n K_i(x_i, y_i)$ and $K((x_1, \dots, x_n), (y_1, \dots, y_n)) = \sum_{i=1}^n K_i(x_i, y_i)$ are p.d. kernels on $\mathcal{X} = \mathcal{X}_1 \times \dots \times \mathcal{X}_n$.
- Let $\mathcal{X}_0 \subset \mathcal{X}$. Then the restriction $K_0$ of $K$ to $\mathcal{X}_0 \times \mathcal{X}_0$ is also a p.d. kernel.
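As a quick numerical illustration of the first two closure properties (a sketch only, using NumPy and the Gaussian and linear kernels from the examples below), one can check that a nonnegative combination and an elementwise product of two p.d. Gram matrices again have nonnegative eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)
xs = rng.normal(size=(6, 2))  # six points in R^2

# Gram matrices of two p.d. kernels evaluated on the same points
K1 = np.exp(-np.sum((xs[:, None] - xs[None, :]) ** 2, axis=-1))  # Gaussian
K2 = xs @ xs.T                                                   # linear

conical_sum = 2.0 * K1 + 0.5 * K2  # conical combination with nonnegative weights
product = K1 * K2                  # elementwise product of the two kernels

for name, M in [("conical sum", conical_sum), ("product", product)]:
    print(name, bool(np.all(np.linalg.eigvalsh(M) >= -1e-10)))  # expected: True
```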
Examples of p.d. kernels
- Common examples of p.d. kernels defined on Euclidean space $\mathbb{R}^d$ include (see the code sketch after this list):
- * Linear kernel: $K(\mathbf{x}, \mathbf{y}) = \mathbf{x}^\mathsf{T} \mathbf{y}$, $\mathbf{x}, \mathbf{y} \in \mathbb{R}^d$.
- * Polynomial kernel: $K(\mathbf{x}, \mathbf{y}) = (\mathbf{x}^\mathsf{T} \mathbf{y} + r)^n$, $\mathbf{x}, \mathbf{y} \in \mathbb{R}^d$, $r \ge 0$, $n \ge 1$.
- * Gaussian kernel (RBF kernel): $K(\mathbf{x}, \mathbf{y}) = e^{-\frac{\|\mathbf{x} - \mathbf{y}\|^2}{2\sigma^2}}$, $\mathbf{x}, \mathbf{y} \in \mathbb{R}^d$, $\sigma > 0$.
- * Laplacian kernel: $K(\mathbf{x}, \mathbf{y}) = e^{-\alpha \|\mathbf{x} - \mathbf{y}\|}$, $\mathbf{x}, \mathbf{y} \in \mathbb{R}^d$, $\alpha > 0$.
- * Abel kernel: $K(x, y) = e^{-\alpha |x - y|}$, $x, y \in \mathbb{R}$, $\alpha > 0$.
- * Kernel generating Sobolev spaces $W_2^k(\mathbb{R}^d)$: $K(x, y) = \|x - y\|_2^{k - \frac{d}{2}} B_{k - \frac{d}{2}}(\|x - y\|_2)$, where $B_\nu$ is the Bessel function of the third kind.
- * Kernel generating the Paley–Wiener space: $K(x, y) = \operatorname{sinc}(\alpha (x - y))$, $x, y \in \mathbb{R}$, $\alpha > 0$.
- If $H$ is a Hilbert space, then its corresponding inner product $(\cdot, \cdot)_H: H \times H \to \mathbb{R}$ is a p.d. kernel. Indeed, we have
$$\sum_{i,j=1}^n c_i c_j (x_i, x_j)_H = \left( \sum_{i=1}^n c_i x_i, \sum_{j=1}^n c_j x_j \right)_H = \left\| \sum_{i=1}^n c_i x_i \right\|_H^2 \ge 0.$$
- Kernels defined on $\mathbb{R}_+^d$ and histograms: Histograms are frequently encountered in applications of real-life problems. Most observations are usually available in the form of nonnegative vectors of counts, which, if normalized, yield histograms of frequencies. It has been shown that the following family of squared metrics, respectively the Jensen divergence, the $\chi$-square, the total variation, and two variations of the Hellinger distance,
$$\psi_{JD}(x, y) = H\!\left(\frac{x + y}{2}\right) - \frac{H(x) + H(y)}{2}, \qquad \psi_{\chi^2}(x, y) = \sum_i \frac{(x_i - y_i)^2}{x_i + y_i}, \qquad \psi_{TV}(x, y) = \sum_i |x_i - y_i|,$$
$$\psi_{H_1}(x, y) = \sum_i |\sqrt{x_i} - \sqrt{y_i}|, \qquad \psi_{H_2}(x, y) = \sum_i (\sqrt{x_i} - \sqrt{y_i})^2,$$
can be used to define p.d. kernels using the following formula
$$K(x, y) = e^{-\alpha \psi(x, y)}, \quad \alpha > 0.$$
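For concreteness, here is a small sketch (assuming NumPy; the function names are illustrative, not a standard library API) of several of the Euclidean kernels listed above.

```python
import numpy as np

def linear_kernel(x, y):
    return np.dot(x, y)

def polynomial_kernel(x, y, r=1.0, n=2):
    return (np.dot(x, y) + r) ** n

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))

def laplacian_kernel(x, y, alpha=1.0):
    return np.exp(-alpha * np.linalg.norm(x - y))

x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
for k in (linear_kernel, polynomial_kernel, gaussian_kernel, laplacian_kernel):
    print(k.__name__, k(x, y))
```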
History
Positive-definite kernels, as defined in (1.1), appeared first in 1909 in a paper on integral equations by James Mercer. Several other authors made use of this concept in the following two decades, but none of them explicitly used kernels $K(x, y)$, i.e. p.d. functions. Mercer’s work arose from Hilbert’s paper of 1904 on Fredholm integral equations of the second kind:
$$f(s) = \varphi(s) - \lambda \int_a^b K(s, t)\, \varphi(t)\, dt.$$
In particular, Hilbert had shown that
$$\int_a^b \int_a^b K(s, t)\, x(s)\, x(t)\, ds\, dt = \sum_n \frac{1}{\lambda_n} \left[ \int_a^b x(s)\, \psi_n(s)\, ds \right]^2,$$
where $K$ is a continuous real symmetric kernel, $x$ is continuous, $\{\psi_n\}$ is a complete system of orthonormal eigenfunctions, and the $\lambda_n$’s are the corresponding eigenvalues of $K$. Hilbert defined a “definite” kernel as one for which the double integral
$$J(x) = \int_a^b \int_a^b K(s, t)\, x(s)\, x(t)\, ds\, dt$$
satisfies $J(x) > 0$ except for $x(s) = 0$. The original object of Mercer’s paper was to characterize the kernels which are definite in the sense of Hilbert, but Mercer soon found that the class of such functions was too restrictive to characterize in terms of determinants. He therefore defined a continuous real symmetric kernel $K(s, t)$ to be of positive type if $J(x) \ge 0$ for all real continuous functions $x$ on $[a, b]$, and he proved that (1.1) is a necessary and sufficient condition for a kernel to be of positive type. Mercer then proved that for any continuous p.d. kernel the expansion
$$K(s, t) = \sum_n \frac{\psi_n(s)\, \psi_n(t)}{\lambda_n}$$
holds absolutely and uniformly.
At about the same time W. H. Young, motivated by a different question in the theory of integral equations, showed that for continuous kernels condition (1.1) is equivalent to $J(x) \ge 0$ for all $x \in L^1[a, b]$.
E. H. Moore initiated the study of a very general kind of p.d. kernel. If $E$ is an abstract set, he calls functions $K(x, y)$ defined on $E \times E$ “positive Hermitian matrices” if they satisfy (1.1) for all $x_i \in E$. Moore was interested in a generalization of integral equations and showed that to each such $K$ there is a Hilbert space $H$ of functions such that, for each $f \in H$, $f(y) = (f, K(\cdot, y))_H$. This property is called the reproducing property of the kernel and turns out to have importance in the solution of boundary-value problems for elliptic partial differential equations.
Another line of development in which p.d. kernels played a large role was the theory of harmonics on homogeneous spaces as begun by E. Cartan in 1929, and continued by H. Weyl and S. Ito. The most comprehensive theory of p.d. kernels in homogeneous spaces is that of M. Krein which includes as special cases the work on p.d. functions and irreducible unitary representations of locally compact groups.
In probability theory, p.d. kernels arise as covariance kernels of stochastic processes.
Connection with reproducing kernel Hilbert spaces and feature maps
Positive-definite kernels provide a framework that encompasses some basic Hilbert space constructions. In the following we present a tight relationship between positive-definite kernels and two mathematical objects, namely reproducing kernel Hilbert spaces and feature maps.

Let $\mathcal{X}$ be a set, $H$ a Hilbert space of functions $f: \mathcal{X} \to \mathbb{R}$, and $(\cdot, \cdot)_H: H \times H \to \mathbb{R}$ the corresponding inner product on $H$. For any $x \in \mathcal{X}$ the evaluation functional $e_x: H \to \mathbb{R}$ is defined by $f \mapsto e_x(f) = f(x)$.
We first define a reproducing kernel Hilbert space (RKHS):

Definition: Space $H$ is called a reproducing kernel Hilbert space if the evaluation functionals $e_x$ are continuous.
Every RKHS has a special function associated to it, namely the reproducing kernel:
Definition: Reproducing kernel is a function $K: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ such that
- $K(\cdot, x) \in H$ for all $x \in \mathcal{X}$, and
- $(f, K(\cdot, x))_H = f(x)$ for all $f \in H$ and $x \in \mathcal{X}$.
The latter property is called the reproducing property.
The following result shows the equivalence between RKHSs and reproducing kernels: every RKHS has a unique reproducing kernel, and every reproducing kernel induces a unique RKHS.
Now the connection between positive-definite kernels and RKHSs is given by the Moore–Aronszajn theorem: every reproducing kernel is positive definite, and every positive-definite kernel defines a unique RKHS, of which it is the unique reproducing kernel.
Thus, given a positive-definite kernel $K$, it is possible to build an associated RKHS with $K$ as a reproducing kernel.
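A brief sketch of the construction behind this correspondence (the standard argument, stated informally here and not part of the original text): one first forms the linear span $H_0 = \operatorname{span}\{ K(\cdot, x) : x \in \mathcal{X} \}$ and equips it with the inner product
$$\left( \sum_i a_i K(\cdot, x_i), \sum_j b_j K(\cdot, y_j) \right)_H = \sum_{i,j} a_i b_j K(x_i, y_j),$$
which is well defined and positive semi-definite precisely because $K$ is a p.d. kernel; the RKHS $H$ is then obtained as the completion of $H_0$ in the induced norm, and the reproducing property $(f, K(\cdot, x))_H = f(x)$ holds on $H_0$ and extends to $H$ by continuity.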
As stated earlier, positive-definite kernels can be constructed from inner products. This fact can be used to connect p.d. kernels with another interesting object that arises in machine learning applications, namely the feature map. Let $F$ be a Hilbert space, and $(\cdot, \cdot)_F$ the corresponding inner product. Any map $\Phi: \mathcal{X} \to F$ is called a feature map. In this case we call $F$ the feature space. It is easy to see that every feature map defines a unique p.d. kernel by
$$K(x, y) = (\Phi(x), \Phi(y))_F.$$
Indeed, positive definiteness of $K$ follows from the p.d. property of the inner product. On the other hand, every p.d. kernel, and its corresponding RKHS, have many associated feature maps. For example: let $F = H$ and $\Phi(x) = K(\cdot, x)$ for all $x \in \mathcal{X}$. Then $(\Phi(x), \Phi(y))_F = (K(\cdot, x), K(\cdot, y))_H = K(x, y)$, by the reproducing property.
This suggests a new look at p.d. kernels as inner products in appropriate Hilbert spaces, or in other words p.d. kernels can be viewed as similarity maps which quantify effectively how similar two points $x$ and $y$ are through the value $K(x, y)$. Moreover, through the equivalence of p.d. kernels and their corresponding RKHSs, every feature map can be used to construct an RKHS.
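As a concrete illustration (a sketch with NumPy; the explicit quadratic feature map below is a standard textbook example, not taken from the text above), the degree-2 homogeneous polynomial kernel on $\mathbb{R}^2$ can be written as an inner product of explicit feature vectors:

```python
import numpy as np

def phi(x):
    """Explicit feature map for K(x, y) = (x . y)^2 on R^2:
    Phi(x) = (x1^2, x2^2, sqrt(2) * x1 * x2)."""
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

def kernel(x, y):
    return np.dot(x, y) ** 2

x, y = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(np.dot(phi(x), phi(y)))  # inner product in the feature space
print(kernel(x, y))            # same value computed directly from K
```

Both lines print the same number, illustrating $K(x, y) = (\Phi(x), \Phi(y))_F$ for this particular choice of feature map.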
Kernels and distances
Kernel methods are often compared to distance-based methods such as nearest neighbors. In this section we discuss parallels between their two respective ingredients, namely kernels and distances.

Here by a distance function between each pair of elements of some set $\mathcal{X}$, we mean a metric defined on that set, i.e. any nonnegative-valued function $d$ on $\mathcal{X} \times \mathcal{X}$ which satisfies
- $d(x, y) \ge 0$, and $d(x, y) = 0$ if and only if $x = y$,
- $d(x, y) = d(y, x)$,
- $d(x, z) \le d(x, y) + d(y, z)$.
Definition: A symmetric function $\psi: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is called a negative-definite (n.d.) kernel on $\mathcal{X}$ if
$$\sum_{i=1}^n \sum_{j=1}^n c_i c_j \psi(x_i, x_j) \le 0$$
holds for any $n \in \mathbb{N}$, $x_1, \dots, x_n \in \mathcal{X}$, and $c_1, \dots, c_n \in \mathbb{R}$ such that $\sum_{i=1}^n c_i = 0$.
The parallel between n.d. kernels and distances is in the following: whenever an n.d. kernel vanishes on the set $\{(x, x) : x \in \mathcal{X}\}$, and is zero only on this set, then its square root is a distance for $\mathcal{X}$. At the same time, not every distance corresponds to an n.d. kernel. This is only true for Hilbertian distances, where a distance $d$ is called Hilbertian if one can embed the metric space $(\mathcal{X}, d)$ isometrically into some Hilbert space.
On the other hand, n.d. kernels can be identified with a subfamily of p.d. kernels known as infinitely divisible kernels. A nonnegative-valued kernel $K$ is said to be infinitely divisible if for every $n \in \mathbb{N}$ there exists a positive-definite kernel $K_n$ such that $K = (K_n)^n$. For example, the Gaussian kernel $e^{-\alpha \|x - y\|^2}$ is infinitely divisible, since one can take $K_n(x, y) = e^{-\frac{\alpha}{n} \|x - y\|^2}$.
Another link is that a p.d. kernel induces a pseudometric, where the first constraint on the distance function is loosened to allow $d(x, y) = 0$ for $x \neq y$. Given a positive-definite kernel $K$, we can define a distance function as
$$d(x, y) = \sqrt{K(x, x) - 2 K(x, y) + K(y, y)}.$$
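As a minimal sketch (assuming NumPy and reusing the Gaussian kernel from the examples above), the induced distance can be computed directly from kernel evaluations:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))

def kernel_distance(kernel, x, y):
    """Pseudometric induced by a p.d. kernel:
    d(x, y) = sqrt(K(x, x) - 2 K(x, y) + K(y, y))."""
    return np.sqrt(kernel(x, x) - 2 * kernel(x, y) + kernel(y, y))

x, y = np.array([0.0, 0.0]), np.array([1.0, 1.0])
print(kernel_distance(gaussian_kernel, x, y))  # distance induced by the kernel
```

For the Gaussian kernel this reduces to $d(x, y) = \sqrt{2 - 2 e^{-\|x - y\|^2 / (2\sigma^2)}}$, since $K(x, x) = K(y, y) = 1$.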