Jaccard index
The Jaccard index is a statistic used for gauging the similarity and diversity of sample sets.
It is defined in general as the ratio of two sizes: the size of the intersection divided by the size of the union, also called intersection over union.
It was developed by Grove Karl Gilbert in 1884 as his ratio of verification and is now often called the critical success index in meteorology. It was later developed independently by Paul Jaccard, who originally gave it the French name coefficient de communauté, and independently formulated again by Taffee Tadashi Tanimoto. Thus, it is also called the Tanimoto index or Tanimoto coefficient in some fields.
Overview
The Jaccard index measures similarity between finite non-empty sample sets and is defined as the size of the intersection divided by the size of the union of the sample sets:

J(A, B) = |A ∩ B| / |A ∪ B| = |A ∩ B| / (|A| + |B| − |A ∩ B|)

Note that by design, 0 ≤ J(A, B) ≤ 1. If the sets A and B have no elements in common, their intersection is empty, so |A ∩ B| = 0 and therefore J(A, B) = 0. The other extreme is that the two sets are equal; in that case |A ∩ B| = |A ∪ B|, so J(A, B) = 1. The Jaccard index is widely used in computer science, ecology, genomics and other sciences where binary or binarized data are used. Both exact and approximate methods are available for hypothesis testing with the Jaccard index.
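A minimal Python sketch of this definition (the function name is illustrative):

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard index |A ∩ B| / |A ∪ B| for finite sets, not both empty."""
    union = a | b
    if not union:
        raise ValueError("Jaccard index is undefined for two empty sets")
    return len(a & b) / len(union)

print(jaccard({1, 2, 3}, {2, 3, 4}))  # 2 shared of 4 total -> 0.5
```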
Jaccard similarity also applies to bags, i.e., multisets. The formula is analogous, but the symbols denote bag intersection (elementwise minimum of multiplicities) and bag sum (so the denominator is |A| + |B| rather than the union). The maximum value is 1/2, attained when the two bags are equal.
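The bag version can be sketched with Python's collections.Counter, which implements multiset intersection directly; the 1/2 maximum for identical bags falls out of the formula:

```python
from collections import Counter

def bag_jaccard(a: Counter, b: Counter) -> float:
    """Bag (multiset) Jaccard: bag intersection (elementwise minimum
    of multiplicities) divided by the bag sum of the two sizes."""
    inter = sum((a & b).values())  # Counter & takes elementwise minima
    total = sum(a.values()) + sum(b.values())
    return inter / total

# Identical bags reach the maximum value 1/2, not 1:
print(bag_jaccard(Counter("aab"), Counter("aab")))  # 3 / 6 = 0.5
```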
The Jaccard distance, which measures dissimilarity between sample sets, is complementary to the Jaccard index and is obtained by subtracting the Jaccard index from 1 or, equivalently, by dividing the difference of the sizes of the union and the intersection of two sets by the size of the union:

d_J(A, B) = 1 − J(A, B) = (|A ∪ B| − |A ∩ B|) / |A ∪ B|
An alternative interpretation of the Jaccard distance is as the ratio of the size of the symmetric difference A △ B = (A ∪ B) − (A ∩ B) to the size of the union.
Jaccard distance is commonly used to calculate an n × n dissimilarity matrix for clustering and multidimensional scaling of n sample sets.
This distance is a metric on the collection of all finite sets.
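For instance, such a pairwise dissimilarity matrix for a handful of sample sets might be built as follows (a sketch; real pipelines would typically hand this matrix to a library clustering or MDS routine):

```python
def jaccard_distance(a: set, b: set) -> float:
    """Jaccard distance 1 - |A ∩ B| / |A ∪ B| (0 for two empty sets)."""
    union = a | b
    return (len(union) - len(a & b)) / len(union) if union else 0.0

samples = [{1, 2}, {2, 3}, {1, 2, 3}]
n = len(samples)
# n x n symmetric dissimilarity matrix with zero diagonal
dist = [[jaccard_distance(samples[i], samples[j]) for j in range(n)]
        for i in range(n)]
```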
There is also a version of the Jaccard distance for measures, including probability measures. If μ is a measure on a measurable space X, then we define the Jaccard index by

J_μ(A, B) = μ(A ∩ B) / μ(A ∪ B)

and the Jaccard distance by

d_μ(A, B) = 1 − J_μ(A, B) = μ(A △ B) / μ(A ∪ B)

Care must be taken if μ(A ∪ B) = 0 or ∞, since these formulas are not well defined in those cases.
The MinHash min-wise independent permutations locality sensitive hashing scheme may be used to efficiently compute an accurate estimate of the Jaccard similarity index of pairs of sets, where each set is represented by a constant-sized signature derived from the minimum values of a hash function.
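A rough illustration of the MinHash idea, using simple affine hash functions (these are only approximately min-wise independent, and all names are illustrative): each set is reduced to a fixed-length signature of per-hash minima, and the fraction of agreeing signature positions estimates the Jaccard index.

```python
import random

def make_hashes(k, seed=0, p=2_147_483_647):
    """k random affine hash functions modulo a large prime (a sketch;
    not strictly min-wise independent)."""
    rng = random.Random(seed)
    params = [(rng.randrange(1, p), rng.randrange(p)) for _ in range(k)]
    return [lambda x, a=a, b=b: (a * hash(x) + b) % p for a, b in params]

def minhash_signature(s, hashes):
    """Signature: the minimum hash value of the set under each function."""
    return [min(h(x) for x in s) for h in hashes]

hashes = make_hashes(256)
a, b = set(range(100)), set(range(50, 150))   # true Jaccard = 50/150 ≈ 0.33
sig_a = minhash_signature(a, hashes)
sig_b = minhash_signature(b, hashes)
estimate = sum(x == y for x, y in zip(sig_a, sig_b)) / len(hashes)
print(estimate)  # close to 1/3 on average
```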
Similarity of asymmetric binary attributes
Given two objects, A and B, each with n binary attributes, the Jaccard index is a useful measure of the overlap that A and B share in their attributes. Each attribute of A and B can be either 0 or 1. The total number of each combination of attributes for both A and B is specified as follows:

M11 is the number of attributes where A and B both have a value of 1.
M01 is the number of attributes where the attribute of A is 0 and the attribute of B is 1.
M10 is the number of attributes where the attribute of A is 1 and the attribute of B is 0.
M00 is the number of attributes where A and B both have a value of 0.

|       | B = 0 | B = 1 |
| A = 0 | M00   | M01   |
| A = 1 | M10   | M11   |
Each attribute must fall into one of these four categories (Mxy counting the attributes with value x in A and y in B), meaning that

M11 + M01 + M10 + M00 = n

The Jaccard similarity index, J, is given as

J = M11 / (M01 + M10 + M11)

The Jaccard distance, dJ, is given as

dJ = (M01 + M10) / (M01 + M10 + M11) = 1 − J
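A small Python sketch computing J directly from the contingency counts (names illustrative):

```python
def jaccard_binary(a, b):
    """Jaccard index for two equal-length 0/1 attribute vectors,
    computed from the M11/M10/M01 contingency counts (M00 is unused)."""
    m11 = sum(x == 1 and y == 1 for x, y in zip(a, b))
    m10 = sum(x == 1 and y == 0 for x, y in zip(a, b))
    m01 = sum(x == 0 and y == 1 for x, y in zip(a, b))
    return m11 / (m11 + m10 + m01)

a = [1, 1, 0, 0, 1]
b = [1, 0, 0, 1, 1]
print(jaccard_binary(a, b))  # M11=2, M10=1, M01=1 -> 0.5
```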
Statistical inference can be made based on the Jaccard similarity index, and consequently related metrics. Given two sample sets A and B with n attributes, a statistical test can be conducted to see if an overlap is statistically significant. The exact solution is available, although computation can be costly as n increases. Estimation methods are available either by approximating a multinomial distribution or by bootstrapping.
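As one hedged illustration of the resampling approach, a naive bootstrap over attribute positions might look like this (a sketch of the general idea, not the exact test referenced above; all names are illustrative):

```python
import random

def bootstrap_jaccard(a, b, n_boot=2000, seed=0):
    """Bootstrap distribution of the binary Jaccard index by resampling
    attribute positions with replacement. Resamples in which no attribute
    is present in either object are skipped."""
    rng = random.Random(seed)
    n = len(a)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        m11 = sum(a[i] and b[i] for i in idx)
        m_any = sum(a[i] or b[i] for i in idx)
        if m_any:
            stats.append(m11 / m_any)
    return stats

stats = bootstrap_jaccard([1, 1, 0, 1, 0, 1, 0, 0], [1, 0, 0, 1, 1, 1, 0, 0])
stats.sort()
# crude 95% percentile interval for the Jaccard index
ci = (stats[int(0.025 * len(stats))], stats[int(0.975 * len(stats))])
```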
Difference with the simple matching coefficient (SMC)
When used for binary attributes, the Jaccard index is very similar to the simple matching coefficient. The main difference is that the SMC includes the number of mutual absences (the term M00) in its numerator and denominator, whereas the Jaccard index does not. Thus, the SMC counts both mutual presences and mutual absences as matches and compares them to the total number of attributes in the universe, whereas the Jaccard index only counts mutual presence as a match and compares it to the number of attributes that have been chosen by at least one of the two sets.

In market basket analysis, for example, the baskets of two consumers we wish to compare might contain only a small fraction of all the products available in the store, so the SMC will usually return very high similarity values even when the baskets bear very little resemblance, making the Jaccard index a more appropriate measure of similarity in that context. For example, consider a supermarket with 1000 products and two customers. The basket of the first customer contains salt and pepper, and the basket of the second contains salt and sugar. In this scenario, the similarity between the two baskets as measured by the Jaccard index would be 1/3, but the similarity becomes 0.998 using the SMC.
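The supermarket example can be checked numerically (a sketch; names are illustrative):

```python
def jaccard_sets(a, b):
    """Jaccard index of two sets of purchased products."""
    return len(a & b) / len(a | b)

def smc(a, b, universe_size):
    """Simple matching coefficient: matches (mutual presence or mutual
    absence) over all attributes in the universe."""
    m11 = len(a & b)
    m00 = universe_size - len(a | b)
    return (m11 + m00) / universe_size

basket1 = {"salt", "pepper"}
basket2 = {"salt", "sugar"}
print(jaccard_sets(basket1, basket2))   # 1/3: one shared item of three
print(smc(basket1, basket2, 1000))      # (1 + 997) / 1000 = 0.998
```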
In other contexts, where 0 and 1 carry equivalent information, the SMC is a better measure of similarity. For example, vectors of demographic variables stored in dummy variables, such as gender, would be better compared with the SMC than with the Jaccard index since the impact of gender on similarity should be equal, independently of whether male is defined as a 0 and female as a 1 or the other way around. However, when we have symmetric dummy variables, one could replicate the behaviour of the SMC by splitting the dummies into two binary attributes, thus transforming them into asymmetric attributes, allowing the use of the Jaccard index without introducing any bias. The SMC remains, however, more computationally efficient in the case of symmetric dummy variables since it does not require adding extra dimensions.
Weighted Jaccard similarity and distance
If x = (x1, x2, …, xn) and y = (y1, y2, …, yn) are two vectors with all real xi, yi ≥ 0, then their weighted Jaccard similarity index is defined as

J_W(x, y) = Σi min(xi, yi) / Σi max(xi, yi)

and the weighted Jaccard distance as

d_JW(x, y) = 1 − J_W(x, y)
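A direct Python sketch of the weighted Jaccard similarity (names illustrative); note that on 0/1 indicator vectors it reduces to the ordinary set Jaccard index:

```python
def weighted_jaccard(x, y):
    """Weighted Jaccard similarity for non-negative real vectors:
    sum of elementwise minima over sum of elementwise maxima."""
    num = sum(min(xi, yi) for xi, yi in zip(x, y))
    den = sum(max(xi, yi) for xi, yi in zip(x, y))
    return num / den if den else 1.0   # convention for two zero vectors

x = [0.5, 0.0, 2.0]
y = [0.25, 1.0, 2.0]
print(weighted_jaccard(x, y))        # (0.25 + 0 + 2) / (0.5 + 1 + 2)
print(1 - weighted_jaccard(x, y))    # weighted Jaccard distance
```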
With even more generality, if f and g are two non-negative measurable functions on a measurable space X with measure μ, then we can define

J(f, g) = ∫ min(f, g) dμ / ∫ max(f, g) dμ

where min and max are pointwise operators. Then the Jaccard distance is

d(f, g) = 1 − J(f, g)

Then, for example, for two measurable sets A, B ⊆ X, we have J_μ(A, B) = J(χ_A, χ_B), where χ_A and χ_B are the characteristic functions of the corresponding sets.
Probability Jaccard similarity and distance
The weighted Jaccard similarity described above generalizes the Jaccard index to positive vectors, where a set S corresponds to the binary vector given by its indicator function, i.e. xi = 1 if i ∈ S and xi = 0 otherwise. However, it does not generalize the Jaccard index to probability distributions, where a set corresponds to the uniform probability distribution over its elements, i.e. xi = 1/|S| if i ∈ S and xi = 0 otherwise. Under this correspondence the weighted Jaccard similarity is always less than the Jaccard index of the underlying sets whenever the sets differ in size. For example, if X = {a} and Y = {a, b}, then J(X, Y) = 1/2, but the weighted Jaccard similarity of the corresponding uniform distributions (1, 0) and (1/2, 1/2) is only (1/2) / (3/2) = 1/3.
Instead, a generalization that is continuous between probability distributions and their corresponding support sets is

J_P(x, y) = Σ_{i : xi ≠ 0, yi ≠ 0} 1 / Σ_j max(xj / xi, yj / yi)

which is called the "Probability" Jaccard. It has the following bounds against the weighted Jaccard on probability vectors:

J_W(x, y) ≤ J_P(x, y) ≤ 2 J_W(x, y) / (1 + J_W(x, y))
Here the upper bound is the Sørensen–Dice coefficient.
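A small Python sketch of J_P, assuming the summation formula above, together with a numerical check of the bounds (names illustrative):

```python
def prob_jaccard(x, y):
    """Probability Jaccard J_P for two probability vectors: for each index
    held by both distributions, add 1 over the sum of pairwise ratio maxima."""
    total = 0.0
    for xi, yi in zip(x, y):
        if xi > 0 and yi > 0:
            total += 1.0 / sum(max(xj / xi, yj / yi) for xj, yj in zip(x, y))
    return total

def weighted_jaccard(x, y):
    return sum(map(min, x, y)) / sum(map(max, x, y))

x = [0.5, 0.5, 0.0]
y = [0.25, 0.25, 0.5]
jw, jp = weighted_jaccard(x, y), prob_jaccard(x, y)
print(jw, jp)  # J_W <= J_P <= 2*J_W/(1+J_W) should hold
```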
The corresponding distance, 1 − J_P(x, y), is a metric over probability distributions, and a pseudo-metric over non-negative vectors.
The Probability Jaccard Index has a geometric interpretation as the area of an intersection of simplices. Every point on a unit k-simplex corresponds to a probability distribution on k + 1 elements, because the unit k-simplex is the set of points in k + 1 dimensions that sum to 1. To derive the Probability Jaccard Index geometrically, represent a probability distribution as the unit simplex divided into sub-simplices according to the mass of each item. If you overlay two distributions represented in this way on top of each other, and intersect the simplices corresponding to each item, the area that remains is equal to the Probability Jaccard Index of the distributions.
Optimality of the Probability Jaccard Index
Consider the problem of constructing random variables such that they collide with each other as much as possible. That is, if X ~ x and Y ~ y, we would like to construct X and Y so as to maximize Pr[X = Y]. If we look at just the two distributions x and y in isolation, the highest collision probability we can achieve is Pr[X = Y] = 1 − TV(x, y), where TV is the total variation distance. However, suppose we are not concerned only with maximizing that particular pair; suppose we would like to maximize the collision probability of every pair simultaneously. One could construct an infinite number of random variables, one for each distribution, and seek to maximize the collision probability for all pairs. In the fairly strong sense described below, the Probability Jaccard Index is an optimal way to align these random variables.

For any sampling method G and discrete distributions x and y, if Pr[G(x) = G(y)] > J_P(x, y), then for some z with J_P(x, z) > J_P(x, y) and J_P(y, z) > J_P(x, y), either Pr[G(x) = G(z)] < J_P(x, z) or Pr[G(y) = G(z)] < J_P(y, z).
That is, no sampling method can achieve more collisions than J_P on one pair without achieving fewer collisions than J_P on another pair, where the reduced pair is more similar under J_P than the increased pair. This theorem is true for the Jaccard index of sets (interpreting a set as a uniform distribution over its elements) and the Probability Jaccard, but not for the weighted Jaccard.
This theorem has a visual proof on three element distributions using the simplex representation.
Tanimoto similarity and distance
Various forms of functions described as Tanimoto similarity and Tanimoto distance occur in the literature and on the Internet. Most of these are synonyms for Jaccard similarity and Jaccard distance, but some are mathematically different. Many sources cite an IBM Technical Report as the seminal reference. In "A Computer Program for Classifying Plants", published in October 1960, a method of classification based on a similarity ratio, and a derived distance function, is given. This appears to be the most authoritative source for the meaning of the terms "Tanimoto similarity" and "Tanimoto distance". The similarity ratio is equivalent to Jaccard similarity, but the distance function is not the same as Jaccard distance.