Model-based clustering


In statistics, cluster analysis is the algorithmic grouping of objects into homogeneous
groups based on numerical measurements. Model-based clustering bases this grouping on a statistical model for the data, usually a mixture model. This has several advantages, including a principled statistical basis for clustering,
and ways to choose the number of clusters, to choose the best clustering model, to assess the uncertainty of the clustering, and to identify outliers that do not belong to any group.

Model-based clustering

Suppose that for each of $n$ observations we have data on $d$ variables, denoted by
$y_i = (y_{i,1}, \ldots, y_{i,d})$ for observation $i$. Then
model-based clustering expresses the probability density function of
$y_i$ as a finite mixture, or weighted average of $G$
component probability density functions:

$$p(y_i) = \sum_{g=1}^{G} \tau_g f_g(y_i \mid \theta_g),$$

where $f_g$ is a probability density function with
parameter $\theta_g$, and $\tau_g$ is the corresponding
mixture probability, where $\sum_{g=1}^{G} \tau_g = 1$.
Then in its simplest form, model-based clustering views each component
of the mixture model as a cluster, estimates the model parameters, and assigns
each observation to the cluster corresponding to its most likely mixture component.
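
As an illustrative sketch of this definition, the following Python snippet evaluates a two-component univariate Gaussian mixture density; all parameter values below are made up for illustration, not taken from the text:

```python
import numpy as np
from scipy.stats import norm

tau = np.array([0.6, 0.4])    # mixture probabilities tau_g, summing to 1
mu = np.array([0.0, 4.0])     # component means (illustrative values)
sigma = np.array([1.0, 1.5])  # component standard deviations (illustrative)

def mixture_density(y):
    # Weighted average of component densities: p(y) = sum_g tau_g f_g(y | theta_g)
    return sum(t * norm.pdf(y, loc=m, scale=s)
               for t, m, s in zip(tau, mu, sigma))

print(mixture_density(2.0))  # mixture density evaluated at y = 2
```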

Gaussian mixture model

The most common model for continuous data is that $f_g$ is a multivariate normal distribution with mean vector
$\mu_g$ and covariance matrix $\Sigma_g$, so that
$\theta_g = (\mu_g, \Sigma_g)$.
This defines a Gaussian mixture model. The parameters of the model,
$\tau_g$, $\mu_g$ and $\Sigma_g$ for $g = 1, \ldots, G$,
are typically estimated by maximum likelihood estimation using the
expectation-maximization (EM) algorithm; see also
EM algorithm and GMM model.
Bayesian inference is also often used for inference about finite
mixture models. The Bayesian approach also allows for the case where the number of components, $G$, is infinite, using a Dirichlet process prior, yielding a Dirichlet process mixture model for clustering.
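
As a minimal sketch of maximum likelihood fitting, the following uses scikit-learn's GaussianMixture, which estimates a Gaussian mixture by the EM algorithm; the data are simulated purely for illustration, and $G = 2$ is taken as known:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two simulated bivariate Gaussian groups (illustrative data)
y = np.vstack([rng.normal([0, 0], 1.0, size=(100, 2)),
               rng.normal([4, 4], 1.0, size=(100, 2))])

# Maximum likelihood fit via the EM algorithm
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(y)
labels = gmm.predict(y)      # assign each observation to its most likely component
post = gmm.predict_proba(y)  # posterior membership probabilities (clustering uncertainty)
print(gmm.means_)            # estimated mean vectors mu_g
```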

Choosing the number of clusters

An advantage of model-based clustering is that it provides statistically
principled ways to choose the number of clusters. Each different choice of the number of groups $G$ corresponds to a different mixture model. Then standard statistical model selection criteria such as the
Bayesian information criterion (BIC) can be used to choose $G$. The integrated completed likelihood (ICL) is a different criterion, designed to choose the number of clusters rather than the number of mixture components in the model; these will often be different if highly non-Gaussian clusters are present.
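
A sketch of BIC-based selection of $G$, again on simulated data; note that scikit-learn's bic() is defined so that smaller values are better, while some texts use the opposite sign convention:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
y = np.vstack([rng.normal(0, 1, size=(100, 2)),
               rng.normal(5, 1, size=(100, 2))])   # two well-separated groups

# Fit mixtures with G = 1..6 components and compare BIC values
bic = {G: GaussianMixture(n_components=G, random_state=0).fit(y).bic(y)
       for G in range(1, 7)}
best_G = min(bic, key=bic.get)  # smaller BIC is better in scikit-learn's convention
print(bic, "chosen G:", best_G)
```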

Parsimonious Gaussian mixture model

For data of high dimension, $d$, using a full covariance matrix for each mixture component requires estimation of many parameters, which can result in a loss of precision, generalizability and interpretability. Thus it is common to use more parsimonious component covariance matrices exploiting their geometric interpretation. Gaussian clusters are ellipsoidal, with their volume, shape and orientation determined by the covariance matrix. Consider the eigendecomposition of a matrix

$$\Sigma_g = \lambda_g D_g A_g D_g^T,$$

where $D_g$ is the matrix of eigenvectors of $\Sigma_g$,
$A_g$ is a diagonal matrix whose elements are proportional to
the eigenvalues of $\Sigma_g$ in descending order,
and $\lambda_g$ is the associated constant of proportionality.
Then $\lambda_g$ controls the volume of the ellipsoid,
$A_g$ its shape, and $D_g$ its orientation.
Each of the volume, shape and orientation of the clusters can be
constrained to be equal (E) or allowed to vary (V); the shape can
also be spherical, with identical eigenvalues (I). This yields 14 possible clustering models, shown in this table:
Model   Description                             # Parameters
EII     Spherical, equal volume                 1
VII     Spherical, varying volume               9
EEI     Diagonal, equal volume & shape          4
VEI     Diagonal, equal shape                   12
EVI     Diagonal, equal volume, varying shape   28
VVI     Diagonal, varying volume & shape        36
EEE     Equal volume, shape & orientation       10
VEE     Equal shape & orientation               18
EVE     Equal volume & orientation              34
VVE     Equal orientation                       42
EEV     Equal volume & shape                    58
VEV     Equal shape                             66
EVV     Equal volume                            82
VVV     Varying volume, shape & orientation     90

It can be seen that many of these models are more parsimonious, with far fewer
parameters than the unconstrained model, which has 90 covariance parameters when
$d = 4$ and $G = 9$, the case tabulated above.
Several of these models correspond to well-known heuristic clustering methods.
For example, k-means clustering is equivalent to estimation of the
EII clustering model using the classification EM algorithm. The Bayesian information criterion
can be used to choose the best clustering model as well as the number of clusters. It can also be used as the basis for a method to choose the variables
in the clustering model, eliminating variables that are not useful for clustering.
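
As a software note and hedged illustration: the full 14-model family is implemented in the R package mclust, while scikit-learn exposes only four covariance constraints, corresponding roughly to VII ("spherical"), VVI ("diag"), EEE ("tied") and VVV ("full"); BIC can compare these constrained models directly:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Two elongated groups with different orientations (illustrative data)
y = np.vstack([rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], 150),
               rng.multivariate_normal([4, 0], [[1.0, -0.6], [-0.6, 1.0]], 150)])

for cov in ["spherical", "diag", "tied", "full"]:  # ~VII, ~VVI, ~EEE, ~VVV
    m = GaussianMixture(n_components=2, covariance_type=cov, random_state=0).fit(y)
    print(cov, round(m.bic(y), 1))  # the least constrained model should win here
```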
Different Gaussian model-based clustering methods have been developed with
an eye to handling high-dimensional data. These include the pgmm method, which is based on the mixture of
factor analyzers model, and the HDclassif method, based on the idea of subspace clustering.
The mixture-of-experts framework extends model-based clustering to include covariates.

Example

We illustrate the method with a dataset consisting of three measurements
on 145 subjects for the purpose of diagnosing
diabetes and the type of diabetes present.
The subjects were clinically classified into three groups: normal,
chemical diabetes and overt diabetes, but we use this information only
for evaluating clustering methods, not for classifying subjects.
The BIC plot shows the BIC values for each combination of the number of
clusters, $G$, and the clustering model from the Table.
Each curve corresponds to a different clustering model.
The BIC favors 3 groups, which corresponds to the clinical assessment.
It also favors the unconstrained covariance model, VVV.
This fits the data well, because the normal patients have low values of
both sspg and insulin, while the distributions of the chemical and
overt diabetes groups are elongated, but in different directions.
Thus the volumes, shapes and orientations of the three groups are clearly
different, and so the unconstrained model is appropriate, as selected
by the model-based clustering method.
The classification plot shows the classification of the subjects by model-based
clustering. The classification was quite accurate, with a 12% error rate
as defined by the clinical classification.
Other well-known clustering methods performed worse, with higher
error rates: single-linkage clustering with 46%,
average-linkage clustering with 30%, complete-linkage clustering
also with 30%, and k-means clustering with 28%.

Outliers in clustering

An outlier in clustering is a data point that does not belong to any of
the clusters. One way of modeling outliers in model-based clustering is
to include an additional mixture component that is very dispersed, with
for example a uniform distribution. Another approach is to replace the multivariate
normal densities by $t$-distributions, with the idea that the long tails of the
$t$-distribution would ensure robustness to outliers.
However, this is not breakdown-robust.
A third approach is the "tclust" or data trimming approach
which excludes observations identified as
outliers when estimating the model parameters.
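
As a crude sketch of the trimming idea (not the tclust algorithm itself, which jointly optimizes a trimmed likelihood), one can fit a mixture, flag an assumed fraction of points with the lowest fitted density as outliers, and refit; the data and trimming fraction below are illustrative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
y = np.vstack([rng.normal(0, 1, size=(200, 2)),
               rng.uniform(-10, 10, size=(10, 2))])  # 10 gross outliers

alpha = 0.05  # assumed trimming fraction
gmm = GaussianMixture(n_components=1, random_state=0).fit(y)
logdens = gmm.score_samples(y)                  # log fitted density of each point
keep = logdens > np.quantile(logdens, alpha)    # trim the lowest 5%
gmm_trimmed = GaussianMixture(n_components=1, random_state=0).fit(y[keep])
print("points flagged as outliers:", int(np.sum(~keep)))
```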

Non-Gaussian clusters and merging

Sometimes one or more clusters deviate strongly from the Gaussian assumption.
If a Gaussian mixture is fitted to such data, a strongly non-Gaussian
cluster will often be represented by several mixture components rather than
a single one. In that case, cluster merging can be used to find a better
clustering. A different approach is to use mixtures
of complex component densities to represent non-Gaussian clusters.
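
A rough sketch of merging, under the assumption that one fits more Gaussian components than expected clusters and then greedily combines the pair of components whose merger most reduces the entropy of the posterior classification; published merging criteria differ in detail, and the data here are simulated:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def merge_once(Z):
    """Merge the pair of columns of the posterior matrix Z (n x G) that most
    reduces the classification entropy -sum_i sum_g z_ig log z_ig."""
    eps = 1e-12
    entropy = lambda M: -np.sum(M * np.log(M + eps))
    G = Z.shape[1]
    best_gain, best_Z = -np.inf, None
    for j in range(G):
        for k in range(j + 1, G):
            merged = np.column_stack([np.delete(Z, [j, k], axis=1),
                                      Z[:, j] + Z[:, k]])
            gain = entropy(Z) - entropy(merged)
            if gain > best_gain:
                best_gain, best_Z = gain, merged
    return best_Z

rng = np.random.default_rng(4)
skewed = np.vstack([rng.normal(0.0, 1.0, size=(100, 2)),
                    rng.normal(2.0, 0.5, size=(100, 2))])   # one non-Gaussian cluster
y = np.vstack([skewed, rng.normal(8.0, 1.0, size=(100, 2))])  # plus a Gaussian one

gmm = GaussianMixture(n_components=3, random_state=0).fit(y)
Z = merge_once(gmm.predict_proba(y))  # 3 mixture components -> 2 merged clusters
labels = Z.argmax(axis=1)
```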

Non-continuous data

Categorical data

Clustering multivariate categorical data is most often done using the
latent class model. This assumes that the data arise from a finite
mixture model, where within each cluster the variables are independent.
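
A compact EM sketch of the latent class model for binary data, written directly from the definition above (a finite mixture in which the variables are independent Bernoullis within each class); the function name and simulated data are illustrative:

```python
import numpy as np

def latent_class_em(X, G, n_iter=200, seed=0):
    """Latent class EM for binary data X (n x d): a finite mixture in which
    the d variables are independent Bernoullis within each class."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    tau = np.full(G, 1.0 / G)                      # mixing weights
    theta = rng.uniform(0.25, 0.75, size=(G, d))   # P(X_j = 1 | class g)
    for _ in range(n_iter):
        # E-step: posterior class probabilities under local independence
        logp = (X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
                + np.log(tau))
        logp -= logp.max(axis=1, keepdims=True)
        Z = np.exp(logp)
        Z /= Z.sum(axis=1, keepdims=True)
        # M-step: update weights and class-conditional item probabilities
        tau = Z.mean(axis=0)
        theta = np.clip((Z.T @ X) / Z.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return tau, theta, Z

# Illustrative use on simulated two-class binary data
rng = np.random.default_rng(1)
X = np.vstack([rng.binomial(1, [0.9, 0.9, 0.1, 0.1], size=(100, 4)),
               rng.binomial(1, [0.1, 0.1, 0.9, 0.9], size=(100, 4))])
tau, theta, Z = latent_class_em(X, G=2)
```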

Mixed data

These arise when variables are of different types, such
as continuous, categorical or ordinal data. A latent class model for
mixed data assumes local independence between the variables. The location model relaxes the local independence
assumption. The clustMD approach assumes that
the observed variables are manifestations of underlying continuous Gaussian
latent variables.

Count data

The simplest model-based clustering approach for multivariate
count data is based on finite mixtures with locally independent Poisson
distributions, similar to the latent class model; a minimal EM sketch of this model is given at the end of this subsection.
More realistic approaches allow for dependence and overdispersion in the
counts.
These include methods based on the multivariate Poisson distribution,
the multivariate Poisson-log normal distribution, the integer-valued
autoregressive model and the Gaussian Cox model.
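
The promised sketch of the simplest approach, a finite mixture of locally independent Poissons fitted by EM; the function name and simulated counts are illustrative:

```python
import numpy as np
from scipy.special import gammaln

def poisson_mixture_em(X, G, n_iter=200, seed=0):
    """EM for a finite mixture of locally independent Poissons.
    X: (n x d) counts; returns mixing weights tau and rates lam (G x d)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    tau = np.full(G, 1.0 / G)
    lam = X[rng.choice(n, G, replace=False)].astype(float) + 0.5  # init from random rows
    for _ in range(n_iter):
        # E-step: log p(x | g) = sum_j (x_j log lam_gj - lam_gj - log x_j!)
        logp = (X @ np.log(lam).T - lam.sum(axis=1)
                - gammaln(X + 1).sum(axis=1, keepdims=True) + np.log(tau))
        logp -= logp.max(axis=1, keepdims=True)
        Z = np.exp(logp)
        Z /= Z.sum(axis=1, keepdims=True)
        # M-step: weighted means of the counts give the Poisson rates
        tau = Z.mean(axis=0)
        lam = (Z.T @ X) / Z.sum(axis=0)[:, None] + 1e-9
    return tau, lam, Z

# Illustrative use on simulated two-cluster counts
rng = np.random.default_rng(2)
X = np.vstack([rng.poisson([2, 10], size=(100, 2)),
               rng.poisson([9, 1], size=(100, 2))])
tau, lam, Z = poisson_mixture_em(X, G=2)
```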

Sequence data

These consist of sequences of categorical values from a finite set of
possibilities, such as life course trajectories.
Model-based clustering approaches include group-based trajectory and
growth mixture models and a distance-based
mixture model.

Rank data

These arise when individuals rank objects in order of preference. The data
are then ordered lists of objects, arising in voting, education, marketing
and other areas. Model-based clustering methods for rank data include
mixtures of Plackett-Luce models and mixtures of Benter models,
and mixtures of Mallows models.