Autoencoder
An autoencoder is a type of artificial neural network used to learn efficient codings of unlabeled data. An autoencoder learns two functions: an encoding function that transforms the input data, and a decoding function that recreates the input data from the encoded representation. The autoencoder learns an efficient representation for a set of data, typically for dimensionality reduction, to generate lower-dimensional embeddings for subsequent use by other machine learning algorithms.
Variants exist which aim to make the learned representations assume useful properties. Examples are regularized autoencoders, which are effective in learning representations for subsequent classification tasks, and variational autoencoders, which can be used as generative models. Autoencoders are applied to many problems, including facial recognition, feature detection, anomaly detection, and learning the meaning of words. In terms of data synthesis, autoencoders can also be used to randomly generate new data that is similar to the input data.
Mathematical principles
Definition
An autoencoder is defined by the following components:
- Two sets: the space of encoded messages $\mathcal{Z}$; the space of decoded messages $\mathcal{X}$. Typically $\mathcal{Z}$ and $\mathcal{X}$ are Euclidean spaces, that is, $\mathcal{Z} = \mathbb{R}^n$ and $\mathcal{X} = \mathbb{R}^m$ with $n, m > 0$.
- Two parametrized families of functions: the encoder family $E_\phi : \mathcal{X} \to \mathcal{Z}$, parametrized by $\phi$; the decoder family $D_\theta : \mathcal{Z} \to \mathcal{X}$, parametrized by $\theta$.

For any $x \in \mathcal{X}$, we usually write $z = E_\phi(x)$, and refer to it as the code, the latent variable, latent representation, latent vector, etc. Conversely, for any $z \in \mathcal{Z}$, we usually write $x' = D_\theta(z)$, and refer to it as the (decoded) message.
Usually, both the encoder and the decoder are defined as multilayer perceptrons (MLPs). For example, a one-layer-MLP encoder $E_\phi$ is:

$E_\phi(x) = \sigma(Wx + b)$

where $\sigma$ is an element-wise activation function, $W$ is a "weight" matrix, and $b$ is a "bias" vector.
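To make the formula concrete, here is a minimal NumPy sketch of such a one-layer encoder together with a matching one-layer decoder; the sigmoid activation, the dimensions, and the random initialization are illustrative assumptions rather than choices prescribed above.

```python
# Minimal sketch: one-layer MLP encoder E_phi(x) = sigma(Wx + b) and a matching decoder.
# All sizes and the sigmoid activation are illustrative assumptions.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
m, n = 784, 32                                            # message dimension m, code dimension n (assumed)
W, b = 0.01 * rng.standard_normal((n, m)), np.zeros(n)    # encoder parameters phi = (W, b)
V, c = 0.01 * rng.standard_normal((m, n)), np.zeros(m)    # decoder parameters theta = (V, c)

def encode(x):
    return sigmoid(W @ x + b)                             # E_phi(x) = sigma(Wx + b)

def decode(z):
    return sigmoid(V @ z + c)                             # D_theta(z), assumed one-layer as well

x = rng.random(m)                                         # a toy message
x_reconstructed = decode(encode(x))
```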
Training an autoencoder
An autoencoder, by itself, is simply a tuple of two functions. To judge its quality, we need a task. A task is defined by a reference probability distribution $\mu_{\text{ref}}$ over $\mathcal{X}$, and a "reconstruction quality" function $d : \mathcal{X} \times \mathcal{X} \to [0, \infty]$, such that $d(x, x')$ measures how much $x'$ differs from $x$.

With those, we can define the loss function for the autoencoder as

$L(\theta, \phi) := \mathbb{E}_{x \sim \mu_{\text{ref}}}\left[ d\big(x, D_\theta(E_\phi(x))\big) \right]$

The optimal autoencoder for the given task is then $\arg\min_{\theta, \phi} L(\theta, \phi)$. The search for the optimal autoencoder can be accomplished by any mathematical optimization technique, but usually by gradient descent. This search process is referred to as "training the autoencoder".

In most situations, the reference distribution is just the empirical distribution given by a dataset $\{x_1, \dots, x_N\} \subset \mathcal{X}$, so that

$\mu_{\text{ref}} = \frac{1}{N} \sum_{i=1}^{N} \delta_{x_i}$

where $\delta_{x_i}$ is the Dirac measure, the quality function is just the L2 loss, $d(x, x') = \|x - x'\|_2^2$, and $\|\cdot\|_2$ is the Euclidean norm. Then the problem of searching for the optimal autoencoder is just a least-squares optimization:

$\min_{\theta, \phi} \frac{1}{N} \sum_{i=1}^{N} \left\| x_i - D_\theta(E_\phi(x_i)) \right\|_2^2$
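A minimal training loop for this least-squares objective might look as follows; the PyTorch framework, the network architecture, the optimizer, and the random stand-in dataset are all assumptions made for illustration.

```python
# Sketch of training an undercomplete autoencoder on the least-squares objective above.
# Architecture, hyperparameters, and the synthetic dataset are illustrative assumptions.
import torch
import torch.nn as nn

m, n = 784, 32                                   # message and code dimensions (assumed)
encoder = nn.Sequential(nn.Linear(m, n), nn.ReLU())
decoder = nn.Linear(n, m)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

X = torch.rand(1024, m)                          # stand-in for the dataset x_1, ..., x_N

for epoch in range(10):
    for xb in X.split(64):                       # mini-batch gradient-based optimization
        x_hat = decoder(encoder(xb))
        loss = ((xb - x_hat) ** 2).sum(dim=1).mean()   # (1/N) sum_i ||x_i - D(E(x_i))||^2
        opt.zero_grad()
        loss.backward()
        opt.step()
```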
Interpretation
An autoencoder has two main parts: an encoder that maps the message to a code, and a decoder that reconstructs the message from the code. An optimal autoencoder would perform as close to perfect reconstruction as possible, with "close to perfect" defined by the reconstruction quality function $d$.

The simplest way to perform the copying task perfectly would be to duplicate the signal. To suppress this behavior, the code space $\mathcal{Z}$ usually has fewer dimensions than the message space $\mathcal{X}$.
Such an autoencoder is called undercomplete. It can be interpreted as compressing the message, or reducing its dimensionality.
In the limit of an ideal undercomplete autoencoder, every possible code $z$ in the code space is used to encode a message $x$ that really appears in the distribution $\mu_{\text{ref}}$, and the decoder is also perfect: $D_\theta(E_\phi(x)) = x$. This ideal autoencoder can then be used to generate messages indistinguishable from real messages, by feeding its decoder an arbitrary code $z$ and obtaining $D_\theta(z)$, which is a message that really appears in the distribution $\mu_{\text{ref}}$.
If the code space $\mathcal{Z}$ has dimension larger than or equal to that of the message space $\mathcal{X}$, or the hidden units are given enough capacity, an autoencoder can learn the identity function and become useless. However, experimental results have found that overcomplete autoencoders might still learn useful features.
In the ideal setting, the code dimension and the model capacity could be set on the basis of the complexity of the data distribution to be modeled. A standard way to do so is to add modifications to the basic autoencoder, to be detailed below.
Variations
Variational autoencoder (VAE)
Variational autoencoders (VAEs) belong to the family of variational Bayesian methods. Despite the architectural similarities with basic autoencoders, VAEs are designed with different goals and have a different mathematical formulation. The latent space is, in this case, composed of a mixture of distributions instead of fixed vectors.

Given an input dataset $x$ characterized by an unknown probability function $P(x)$ and a multivariate latent encoding vector $z$, the objective is to model the data as a distribution $p_\theta(x)$, with $\theta$ defined as the set of the network parameters, so that $p_\theta(x) = \int_z p_\theta(x, z)\, dz$.
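As a concrete sketch of this formulation, the snippet below implements a Gaussian encoder with the reparameterization trick and the usual negative evidence lower bound (reconstruction term plus KL divergence to a standard normal prior); the layer sizes, activations, and Bernoulli reconstruction loss are assumptions made for illustration, not details fixed by the text above.

```python
# Minimal VAE sketch: Gaussian q(z|x) via the reparameterization trick, negative-ELBO loss.
# Layer sizes, activations, and the Bernoulli (BCE) likelihood are illustrative assumptions.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)        # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)    # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
        return self.dec(z), mu, logvar

def negative_elbo(x, x_hat, mu, logvar):
    # Reconstruction term (assumes x is scaled to [0, 1]) plus KL(q(z|x) || N(0, I)).
    recon = nn.functional.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```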
Sparse autoencoder (SAE)
Inspired by the sparse coding hypothesis in neuroscience, sparse autoencoders are variants of autoencoders, such that the codes $E_\phi(x)$ for messages tend to be sparse codes, that is, $E_\phi(x)$ is close to zero in most entries. Sparse autoencoders may include more hidden units than inputs, but only a small number of the hidden units are allowed to be active at the same time. Encouraging sparsity improves performance on classification tasks.

There are two main ways to enforce sparsity. One way is to simply clamp all but the highest-k activations of the latent code to zero. This is the k-sparse autoencoder.
The k-sparse autoencoder inserts the following "k-sparse function" in the latent layer of a standard autoencoder:

$f_k(x_1, \dots, x_n) = (x_1 b_1, \dots, x_n b_n)$

where $b_i = 1$ if $|x_i|$ ranks in the top k, and 0 otherwise.

Backpropagating through $f_k$ is simple: set the gradient to 0 for entries with $b_i = 0$, and keep the gradient for entries with $b_i = 1$. This is essentially a generalized ReLU function.
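A short sketch of this k-sparse function, assuming a PyTorch tensor of latent activations; the batch shape and the value of k are arbitrary illustrations.

```python
# Sketch of the k-sparse function: keep the k largest-magnitude entries per row, zero the rest.
import torch

def k_sparse(z, k):
    idx = torch.topk(z.abs(), k, dim=-1).indices       # positions ranking in the top k
    mask = torch.zeros_like(z).scatter_(-1, idx, 1.0)  # b_i = 1 on those positions, else 0
    # The mask comes from non-differentiable indices, so gradients flow only through the
    # kept entries, matching the backpropagation rule described above.
    return z * mask

z = torch.randn(4, 16, requires_grad=True)
z_sparse = k_sparse(z, k=3)                            # 3 nonzero entries per row
```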
The other way is a relaxed version of the k-sparse autoencoder. Instead of forcing sparsity, we add a sparsity regularization loss, then optimize for

$\min_{\theta, \phi} L(\theta, \phi) + \lambda L_{\text{sparse}}(\theta, \phi)$

where $\lambda > 0$ measures how much sparsity we want to enforce.

Let the autoencoder architecture have $K$ layers. To define a sparsity regularization loss, we need a "desired" sparsity $\hat\rho_k$ for each layer, a weight $w_k$ for how much to enforce each sparsity, and a function $s : [0, 1] \times [0, 1] \to [0, \infty]$ to measure how much two sparsities differ.

For each input $x$, let the actual sparsity of activation in each layer $k$ be

$\rho_k(x) = \frac{1}{n_k} \sum_{i=1}^{n_k} a_{k,i}(x)$

where $a_{k,i}(x)$ is the activation in the $i$-th neuron of the $k$-th layer upon input $x$, and $n_k$ is the number of neurons in that layer.

The sparsity loss upon input $x$ for one layer is $s(\hat\rho_k, \rho_k(x))$, and the sparsity regularization loss for the entire autoencoder is the expected weighted sum of sparsity losses:

$L_{\text{sparse}}(\theta, \phi) = \mathbb{E}_{x \sim \mu_X}\left[ \sum_{k \in 1:K} w_k\, s(\hat\rho_k, \rho_k(x)) \right]$

Typically, the function $s$ is either the Kullback-Leibler divergence, as

$s(\rho, \hat\rho) = KL(\rho \| \hat\rho) = \rho \ln\frac{\rho}{\hat\rho} + (1 - \rho) \ln\frac{1 - \rho}{1 - \hat\rho}$

or the L1 loss, as $s(\rho, \hat\rho) = |\rho - \hat\rho|$, or the L2 loss, as $s(\rho, \hat\rho) = |\rho - \hat\rho|^2$.
Alternatively, the sparsity regularization loss may be defined without reference to any "desired sparsity", but simply force as much sparsity as possible. In this case, one can define the sparsity regularization loss as

$L_{\text{sparse}}(\theta, \phi) = \mathbb{E}_{x \sim \mu_X}\left[ \sum_{k \in 1:K} w_k \left\| h_k \right\| \right]$

where $h_k$ is the activation vector in the $k$-th layer of the autoencoder. The norm $\|\cdot\|$ is usually the L1 norm or the L2 norm.
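The following sketch shows both flavors of penalty for a single layer's activations; the desired sparsity, the weight, and the assumption of sigmoid (i.e. $(0, 1)$-valued) activations are illustrative choices.

```python
# Sketches of sparsity penalties for one layer. Desired sparsity and weight values are assumed.
import torch

def kl_sparsity_penalty(a, desired=0.05, w=1.0):
    # a: (batch, units) activations of one layer, assumed to lie in (0, 1), e.g. sigmoid outputs.
    actual = a.mean(dim=1)                       # rho_k(x): mean activation of the layer per input
    kl = (desired * torch.log(desired / actual)
          + (1 - desired) * torch.log((1 - desired) / (1 - actual)))
    return w * kl.mean()                         # expectation over the batch, weighted by w_k

def l1_sparsity_penalty(a, w=1.0):
    # Norm-based variant with no "desired sparsity": the L1 norm of the activation vector.
    return w * a.abs().sum(dim=1).mean()
```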
Denoising autoencoder (DAE)
Denoising autoencoders try to achieve a good representation by changing the reconstruction criterion.

A DAE, originally called a "robust autoassociative network" by Mark A. Kramer, is trained by intentionally corrupting the inputs of a standard autoencoder during training. A noise process is defined by a probability distribution $\mu_T$ over functions $T : \mathcal{X} \to \mathcal{X}$. That is, the function $T$ takes a message $x \in \mathcal{X}$ and corrupts it to a noisy version $T(x)$. The function $T$ is selected randomly, with probability distribution $\mu_T$.

Given a task $(\mu_{\text{ref}}, d)$, the problem of training a DAE is the optimization problem:

$\min_{\theta, \phi} L(\theta, \phi) = \mathbb{E}_{x \sim \mu_{\text{ref}},\, T \sim \mu_T}\left[ d\big(x, D_\theta(E_\phi(T(x)))\big) \right]$

That is, the optimal DAE should take any noisy message $T(x)$ and attempt to recover the original message $x$ without noise, thus the name "denoising".
Usually, the noise process is applied only during training and testing, not during downstream use.
The use of DAE depends on two assumptions:
- There exist representations of the messages that are relatively stable and robust to the type of noise we are likely to encounter;
- The said representations capture structures in the input distribution that are useful for our purposes.
Example noise processes include:
- additive isotropic Gaussian noise,
- masking noise (a fraction of the input is randomly chosen and set to 0),
- salt-and-pepper noise (a fraction of the input is randomly chosen and randomly set to its minimum or maximum value).
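A single DAE training step using masking noise might look like the sketch below; the corruption rate, the L2 reconstruction loss, and the encoder/decoder/optimizer objects are assumptions for illustration.

```python
# Sketch of one denoising-autoencoder training step with masking noise.
# The corruption rate and the L2 reconstruction criterion are illustrative assumptions.
import torch

def dae_step(encoder, decoder, opt, x, corrupt_prob=0.3):
    mask = (torch.rand_like(x) > corrupt_prob).float()
    x_tilde = x * mask                           # T(x): randomly zero out a fraction of the input
    x_hat = decoder(encoder(x_tilde))            # reconstruct from the corrupted message
    loss = ((x - x_hat) ** 2).sum(dim=1).mean()  # compare against the clean message x
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```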
Contractive autoencoder (CAE)
A contractive autoencoder adds to its reconstruction loss an explicit regularization term that penalizes the squared Frobenius norm of the Jacobian of the encoder activations with respect to the input, $\|\nabla_x E_\phi(x)\|_F^2$, so that the extracted features change as little as possible under small perturbations of the input. The DAE can be understood as an infinitesimal limit of the CAE: in the limit of small Gaussian input noise, DAEs make the reconstruction function resist small but finite-sized input perturbations, while CAEs make the extracted features resist infinitesimal input perturbations.
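As an illustrative sketch (not a definition drawn from the text above): for a one-layer sigmoid encoder $h = \sigma(Wx + b)$, the squared Frobenius norm of the encoder Jacobian has a simple closed form, which the helper below computes as a penalty; the penalty weight and tensor shapes are assumptions.

```python
# Sketch of the contractive penalty for a one-layer sigmoid encoder h = sigmoid(Wx + b):
# ||dh/dx||_F^2 = sum_j (h_j (1 - h_j))^2 * sum_i W_ji^2, averaged over the batch.
import torch

def contractive_penalty(h, W, lam=1e-4):
    # h: (batch, n) sigmoid activations of the encoder; W: (n, m) encoder weight matrix.
    dh = h * (1 - h)                             # elementwise sigmoid derivative
    w_sq = (W ** 2).sum(dim=1)                   # squared row norms of W, shape (n,)
    return lam * (dh ** 2 * w_sq).sum(dim=1).mean()
```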
Minimum description length autoencoder (MDL-AE)
A minimum description length autoencoder (MDL-AE) is an advanced variation of the traditional autoencoder, which leverages principles from information theory, specifically the Minimum Description Length (MDL) principle. The MDL principle posits that the best model for a dataset is the one that provides the shortest combined encoding of the model and the data. In the context of autoencoders, this principle is applied to ensure that the learned representation is not only compact but also interpretable and efficient for reconstruction.

The MDL-AE seeks to minimize the total description length of the data, which includes the size of the latent representation and the error in reconstructing the original data. The objective can be expressed as

$L = L(z) + L(x \mid z)$

where $L(z)$ represents the length of the compressed latent representation and $L(x \mid z)$ denotes the reconstruction error.