Mixture of experts
Mixture of experts (MoE) is a machine learning technique in which multiple expert networks are used to divide a problem space into homogeneous regions. MoE represents a form of ensemble learning. Such models were also historically called committee machines.
Basic theory
MoE always has the following components, but they are implemented and combined differently according to the problem being solved:
- Experts $f_1, \dots, f_n$, each taking the same input $x$ and producing outputs $f_1(x), \dots, f_n(x)$.
- A weighting function (also known as a gating function) $w$, which takes input $x$ and produces a vector of outputs $(w(x)_1, \dots, w(x)_n)$. This may or may not be a probability distribution, but in both cases, its entries are non-negative.
- $\theta = (\theta_0, \theta_1, \dots, \theta_n)$ is the set of parameters. The parameter $\theta_0$ is for the weighting function. The parameters $\theta_1, \dots, \theta_n$ are for the experts.
- Given an input $x$, the mixture of experts produces a single output by combining $f_1(x), \dots, f_n(x)$ according to the weights $w(x)_1, \dots, w(x)_n$ in some way, usually by $f(x) = \sum_i w(x)_i f_i(x)$, as illustrated in the sketch below.
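As an illustration of the weighted-sum combination above, the following is a minimal sketch in Python/NumPy; the linear experts, the linear-softmax gate, and all dimensions are placeholder assumptions rather than any particular published model:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
d, m, n = 4, 3, 5                    # input dim, output dim, number of experts
expert_W = [rng.normal(size=(m, d)) for _ in range(n)]   # parameters theta_1..theta_n
gate_W = rng.normal(size=(n, d))                         # parameters theta_0

def moe(x):
    w = softmax(gate_W @ x)                    # weighting function w(x): non-negative, sums to 1
    outputs = [W @ x for W in expert_W]        # expert outputs f_i(x)
    return sum(wi * fi for wi, fi in zip(w, outputs))   # f(x) = sum_i w(x)_i f_i(x)

print(moe(rng.normal(size=d)))
```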
Meta-pi network
The meta-pi network, reported by Hampshire and Waibel, uses $f(x) = \sum_i w(x)_i f_i(x)$ as the output. The model is trained by performing gradient descent on the mean-squared error loss. The experts may be arbitrary functions.

In their original publication, they were solving the problem of classifying phonemes in a speech signal from 6 different Japanese speakers, 2 females and 4 males. They trained 6 experts, each being a "time-delayed neural network". They found that the resulting mixture of experts dedicated 5 experts to 5 of the speakers, but the 6th speaker did not have a dedicated expert; instead, his voice was classified by a linear combination of the experts for the other 3 male speakers.
Adaptive mixtures of local experts
The adaptive mixtures of local experts uses a Gaussian mixture model. Each expert simply predicts a Gaussian distribution, and totally ignores the input. Specifically, the $i$-th expert predicts that the output is $y \sim N(\mu_i, I)$, where $\mu_i$ is a learnable parameter. The weighting function is a linear-softmax function:
$$w(x)_i = \frac{e^{k_i^T x + b_i}}{\sum_j e^{k_j^T x + b_j}}$$
The mixture of experts predicts that the output is distributed according to the log-probability density function:
$$\ln f_\theta(y \mid x) = \ln\left[\sum_i w(x)_i \, N(y \mid \mu_i, I)\right]$$
It is trained by maximal likelihood estimation, that is, gradient ascent on $\ln f_\theta(y \mid x)$. The gradient for the $i$-th expert is
$$\nabla_{\mu_i} \ln f_\theta(y \mid x) = \frac{w(x)_i \, N(y \mid \mu_i, I)}{\sum_j w(x)_j \, N(y \mid \mu_j, I)} \,(y - \mu_i)$$
and the gradient for the weighting function is
$$\nabla_{(k_i, b_i)} \ln f_\theta(y \mid x) = \begin{bmatrix} x \\ 1 \end{bmatrix} \left( \frac{w(x)_i \, N(y \mid \mu_i, I)}{\sum_j w(x)_j \, N(y \mid \mu_j, I)} - w(x)_i \right)$$
For each input-output pair $(x, y)$, the weighting function is changed to increase the weight on all experts that performed above average, and decrease the weight on all experts that performed below average. This encourages the weighting function to learn to select only the experts that make the right predictions for each input.
The $i$-th expert is changed to make its prediction closer to $y$, but the amount of change is proportional to $\frac{w(x)_i \, N(y \mid \mu_i, I)}{\sum_j w(x)_j \, N(y \mid \mu_j, I)}$. This has a Bayesian interpretation. Given input $x$, the prior probability that expert $i$ is the right one is $w(x)_i$, and $N(y \mid \mu_i, I)$ is the likelihood of the evidence $y$. So, $\frac{w(x)_i \, N(y \mid \mu_i, I)}{\sum_j w(x)_j \, N(y \mid \mu_j, I)}$ is the posterior probability for expert $i$, and so the rate of change for the $i$-th expert is proportional to its posterior probability.
In words, the experts that, in hindsight, seemed like the good experts to consult, are asked to learn on the example. The experts that, in hindsight, were not, are left alone.
The combined effect is that the experts become specialized: suppose two experts are both good at predicting a certain kind of input, but one is slightly better; the weighting function would then eventually learn to favor the better one. After that happens, the lesser expert is unable to obtain a high gradient signal, and becomes even worse at predicting that kind of input. Conversely, the lesser expert can become better at predicting other kinds of input, and is increasingly pulled away into another region. This has a positive feedback effect, causing each expert to move apart from the rest and take care of a local region alone.
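A minimal sketch of this training rule in Python/NumPy, following the gradients above; the synthetic data, learning rate, and dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lr = 2, 3, 0.1              # input/output dimension, number of experts, learning rate
mu = rng.normal(size=(n, d))      # expert means mu_i (the experts ignore the input)
K = rng.normal(size=(n, d))       # gating weights k_i
b = np.zeros(n)                   # gating biases b_i

def gate(x):                      # linear-softmax weighting function w(x)
    z = K @ x + b
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(x, y):
    global mu, K, b
    w = gate(x)
    lik = np.exp(-0.5 * ((y - mu) ** 2).sum(axis=1))  # N(y | mu_i, I), up to a constant factor
    post = w * lik / (w * lik).sum()                  # posterior probability of each expert
    mu += lr * post[:, None] * (y - mu)               # each expert moves toward y, scaled by its posterior
    K += lr * np.outer(post - w, x)                   # gating update: (posterior - prior) * x
    b += lr * (post - w)

# Synthetic data: two input clusters, each associated with a different output mean.
for _ in range(2000):
    c = rng.integers(2)
    x = rng.normal(loc=3.0 * c, size=d)
    y = rng.normal(loc=(-2.0, 2.0)[c], size=d)
    train_step(x, y)
print(mu)   # two of the experts drift toward the cluster-specific output means
```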
Hierarchical MoE
Hierarchical mixtures of experts uses multiple levels of gating in a tree. Each gating is a probability distribution over the next level of gatings, and the experts are on the leaf nodes of the tree. They are similar to decision trees.

For example, a 2-level hierarchical MoE would have a first-order gating function $w_i$, second-order gating functions $w_{j \mid i}$, and experts $f_{j,i}$. The total prediction is then $\sum_i w_i(x) \sum_j w_{j \mid i}(x) \, f_{j,i}(x)$.
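A minimal sketch of the 2-level prediction in Python/NumPy, with placeholder linear experts and softmax gates at both levels; the branching factors and dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 4, 2                       # input and output dimensions
n1, n2 = 3, 2                     # branches at the first and second level

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

top_gate = rng.normal(size=(n1, d))            # parameters of w_i
sub_gates = rng.normal(size=(n1, n2, d))       # parameters of w_{j|i}
experts = rng.normal(size=(n1, n2, m, d))      # leaf experts f_{j,i}

def hierarchical_moe(x):
    w_top = softmax(top_gate @ x)              # w_i(x)
    y = np.zeros(m)
    for i in range(n1):
        w_sub = softmax(sub_gates[i] @ x)      # w_{j|i}(x)
        for j in range(n2):
            y += w_top[i] * w_sub[j] * (experts[i, j] @ x)
    return y

print(hierarchical_moe(rng.normal(size=d)))
```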
Variants
The mixture of experts, being similar to the Gaussian mixture model, can also be trained by the expectation-maximization algorithm, just like Gaussian mixture models. Specifically, during the expectation step, the "burden" for explaining each data point is assigned over the experts, and during the maximization step, the experts are trained to improve the explanations they got a high burden for, while the gate is trained to improve its burden assignment. This can converge faster than gradient ascent on the log-likelihood.

The choice of gating function is often softmax. Other than that, gating may use Gaussian distributions and exponential families.
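A minimal sketch of EM-style rounds for the Gaussian-expert mixture described earlier, in Python/NumPy; the batch data is synthetic, and updating the gate with a single gradient step (rather than a full inner maximization) is a simplifying assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, n = 500, 2, 3
X = rng.normal(size=(T, d))
Y = rng.normal(size=(T, d))
mu = rng.normal(size=(n, d))                   # expert means
K, b = rng.normal(size=(n, d)), np.zeros(n)    # gate parameters

def gate(X):                                   # row-wise softmax over K x + b
    Z = X @ K.T + b
    Z -= Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

for _ in range(20):
    # E-step: "burden" (responsibility) of each expert for each data point
    W = gate(X)                                                     # (T, n)
    lik = np.exp(-0.5 * ((Y[:, None, :] - mu[None]) ** 2).sum(-1))  # N(y | mu_i, I), up to a constant
    R = W * lik
    R /= R.sum(axis=1, keepdims=True)
    # M-step for the experts: burden-weighted mean of the targets
    mu = (R.T @ Y) / R.sum(axis=0)[:, None]
    # Gate update: one gradient step toward matching its weights to the burdens
    K += 0.1 * (R - W).T @ X / T
    b += 0.1 * (R - W).mean(axis=0)
```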
Instead of performing a weighted sum of all the experts, in hard MoE, only the highest-ranked expert is chosen. That is, $f(x) = f_{\arg\max_i w(x)_i}(x)$. This can accelerate training and inference time.
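In code, the hard selection is just an argmax over the gating scores; a minimal self-contained sketch in Python/NumPy with placeholder linear experts and a linear gate (all of these are illustrative assumptions, not a specific published model):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 4, 3, 5
experts = rng.normal(size=(n, m, d))     # placeholder linear experts
gate_W = rng.normal(size=(n, d))         # linear gate (softmax omitted: argmax is unaffected)

def hard_moe(x):
    i = np.argmax(gate_W @ x)            # pick only the highest-ranked expert
    return experts[i] @ x                # f(x) = f_{argmax_i w(x)_i}(x)

print(hard_moe(rng.normal(size=d)))
```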
The experts can use more general forms of multivariate Gaussian distributions. For example, one proposal uses $f_i(y \mid x) = N(y \mid A_i x + b_i, \Sigma_i)$, where $A_i, b_i, \Sigma_i$ are learnable parameters. In words, each expert learns to do linear regression, with a learnable uncertainty estimate.
One can use experts other than Gaussian distributions. For example, one can use the Laplace distribution, or Student's t-distribution. For binary classification, logistic regression experts were also proposed, with
$$f_i(y \mid x) = \begin{cases} \dfrac{1}{1 + e^{\beta_i^T x + \beta_{i,0}}}, & y = 0 \\ 1 - \dfrac{1}{1 + e^{\beta_i^T x + \beta_{i,0}}}, & y = 1 \end{cases}$$
where $\beta_i, \beta_{i,0}$ are learnable parameters. This was later generalized for multi-class classification, with multinomial logistic regression experts.
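A minimal sketch of the binary-classification mixture with logistic regression experts, in Python/NumPy; the softmax gate and all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 3
beta = rng.normal(size=(n, d))           # per-expert logistic weights beta_i
beta0 = rng.normal(size=n)               # per-expert biases beta_{i,0}
gate_W = rng.normal(size=(n, d))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mixture_prob(x, y):
    """P(y | x) under the mixture, for y in {0, 1}."""
    w = softmax(gate_W @ x)                          # gating weights w(x)
    p0 = 1.0 / (1.0 + np.exp(beta @ x + beta0))      # each expert's f_i(y = 0 | x)
    p = p0 if y == 0 else 1.0 - p0                   # f_i(y | x)
    return float(w @ p)                              # sum_i w(x)_i f_i(y | x)

x = rng.normal(size=d)
print(mixture_prob(x, 0), mixture_prob(x, 1))        # the two probabilities sum to 1
```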
One paper proposed mixture of softmaxes for autoregressive language modelling. Specifically, consider a language model that, given previous text $c$, predicts the next word $x$. The network encodes the text into a vector $v_c$, and predicts the probability distribution of the next word as $\mathrm{softmax}(v_c W)$ for an embedding matrix $W$. In mixture of softmaxes, the model outputs multiple vectors $v_{c,1}, \dots, v_{c,n}$, and predicts the next word as $\sum_{i=1}^n p_i \, \mathrm{softmax}(v_{c,i} W)$, where $p_i$ is a probability distribution obtained by a linear-softmax operation on the activations of the hidden neurons within the model. The original paper demonstrated its effectiveness for recurrent neural networks. This was later found to work for Transformers as well.
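A minimal sketch of the mixture-of-softmaxes output layer in Python/NumPy; the hidden size, vocabulary size, and the particular projections producing $v_{c,i}$ and the mixing weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden, vocab, n = 8, 10, 4

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

W = rng.normal(size=(hidden, vocab))          # shared embedding / output matrix
proj = rng.normal(size=(n, hidden, hidden))   # maps the hidden state to v_{c,1..n}
mix = rng.normal(size=(n, hidden))            # linear layer producing the mixing logits

def mos_next_word_distribution(h):
    """h: hidden state encoding the previous text c."""
    p = softmax(mix @ h)                      # mixing weights p_1..p_n
    vs = np.tanh(proj @ h)                    # component vectors v_{c,i} (tanh is an illustrative choice)
    comps = softmax(vs @ W, axis=-1)          # softmax(v_{c,i} W), shape (n, vocab)
    return p @ comps                          # sum_i p_i softmax(v_{c,i} W)

h = rng.normal(size=hidden)
print(mos_next_word_distribution(h).sum())    # ≈ 1.0
```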
Deep learning
The previous section described MoE as it was used before the era of deep learning. After the rise of deep learning, MoE found applications in running the largest models, as a simple way to perform conditional computation: only parts of the model are used, the parts chosen according to what the input is.

The earliest paper that applies MoE to deep learning dates back to 2013, which proposed to use a different gating network at each layer in a deep neural network. Specifically, each gating is a linear-ReLU-linear-softmax network, and each expert is a linear-ReLU network. Since the output from the gating is not sparse, all expert outputs are needed, and no conditional computation is performed.
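A minimal sketch of one such densely gated layer in Python/NumPy; each expert is a linear-ReLU network and the gate is a linear-ReLU-linear-softmax network, with layer widths chosen as illustrative assumptions. Because the gate output is dense, every expert must be evaluated:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, d_gate, n = 16, 16, 8, 4

def relu(z): return np.maximum(z, 0.0)
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

expert_W = rng.normal(size=(n, d_out, d_in)) * 0.1   # linear-ReLU experts
gate_W1 = rng.normal(size=(d_gate, d_in)) * 0.1      # gate: linear-ReLU-linear-softmax
gate_W2 = rng.normal(size=(n, d_gate)) * 0.1

def dense_moe_layer(x):
    g = softmax(gate_W2 @ relu(gate_W1 @ x))         # dense gating weights
    outs = relu(expert_W @ x)                        # all experts are evaluated, shape (n, d_out)
    return g @ outs                                  # weighted sum: no conditional computation

print(dense_moe_layer(rng.normal(size=d_in)).shape)
```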
The key goal when using MoE in deep learning is to reduce computing cost. As a result, for each query, only a small subset of the experts should be queried. This makes MoE in deep learning different from classical MoE. In classical MoE, the output for each query is a weighted sum of all experts' outputs. In deep learning MoE, the output for each query can only involve a few experts' outputs. Consequently, the key design choice in MoE becomes routing: given a batch of queries, how to route the queries to the best experts.
Sparsely-gated MoE layer
The sparsely-gated MoE layer, published by researchers from Google Brain, uses feedforward networks as experts, and linear-softmax gating. Similar to the previously proposed hard MoE, they achieve sparsity by a weighted sum of only the top-$k$ experts, instead of the weighted sum of all of them. Specifically, in an MoE layer, there are $n$ feedforward networks $f_1, \dots, f_n$, and a gating network $w$. The gating network is defined by
$$w(x) = \mathrm{softmax}(\mathrm{top}_k(W x + \text{noise}))$$
where $\mathrm{top}_k$ is a function that keeps the top-$k$ entries of a vector the same, but sets all other entries to $-\infty$. The addition of noise helps with load balancing.

The choice of $k$ is a hyperparameter that is chosen according to application. Typical values are $k = 1, 2$. The $k = 1$ version is also called the Switch Transformer. The original Switch Transformer was applied to a T5 language model.
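A minimal sketch of noisy top-$k$ gating in Python/NumPy; the expert widths, the fixed noise scale, and the use of plain Gaussian noise (instead of the learned noise of the original paper) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_hidden, n, k = 16, 32, 8, 2

# Feedforward (linear-ReLU-linear) experts and a linear gate.
W1 = rng.normal(size=(n, d_hidden, d)) * 0.1
W2 = rng.normal(size=(n, d, d_hidden)) * 0.1
gate_W = rng.normal(size=(n, d)) * 0.1

def sparse_moe_layer(x, noise_scale=0.1):
    logits = gate_W @ x + noise_scale * rng.normal(size=n)  # Wx + noise
    topk = np.argsort(logits)[-k:]                          # indices of the top-k entries
    masked = np.full(n, -np.inf)
    masked[topk] = logits[topk]                             # all other entries set to -inf
    w = np.exp(masked - logits[topk].max())
    w /= w.sum()                                            # softmax; zero weight off the top-k
    y = np.zeros(d)
    for i in topk:                                          # only k experts are evaluated
        y += w[i] * (W2[i] @ np.maximum(W1[i] @ x, 0.0))
    return y

print(sparse_moe_layer(rng.normal(size=d)).shape)
```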
As a demonstration, they trained a series of models for machine translation with alternating layers of MoE and LSTM, and compared them with deep LSTM models. In the paper's Table 3, the MoE models used less inference-time compute, despite having 30 times more parameters.
Load balancing
Vanilla MoE tends to have issues of load balancing: some experts are consulted often, while other experts are consulted rarely or not at all. To encourage the gate to select each expert with equal frequency within each batch, each MoE layer has two auxiliary loss functions. This is improved by the Switch Transformer into a single auxiliary loss function. Specifically, let $n$ be the number of experts; then for a given batch of queries $\{x_1, x_2, \dots, x_T\}$, the auxiliary loss for the batch is
$$n \sum_{i=1}^n f_i P_i$$
Here, $f_i = \frac{1}{T} \#\{\text{queries that chose expert } i\}$ is the fraction of tokens that chose expert $i$, and $P_i = \frac{1}{T} \sum_{t=1}^T w(x_t)_i$ is the fraction of weight on expert $i$. This loss is minimized at $1$, precisely when every expert has equal weight $1/n$ in all situations.

Researchers at DeepSeek designed a variant of MoE, with "shared experts" that are always queried, and "routed experts" that might not be. They found that standard load balancing encourages the experts to be equally consulted, but this then causes experts to replicate the same core capacity, such as English grammar. They proposed the shared experts to learn core capacities that are often used, and let the routed experts learn the peripheral capacities that are rarely used.

They also proposed an "auxiliary-loss-free load balancing strategy", which does not use auxiliary loss. Instead, each expert $i$ has an extra "expert bias" $b_i$. If an expert is being neglected, its bias increases, and vice versa. During token assignment, each token picks the top-$k$ experts, but with the bias added in. That is:
$$\text{chosen experts} = \mathrm{argtop}_k \big( w(x)_i + b_i \big)$$
Note that the expert bias matters for picking the experts, but not in adding up the responses from the experts.
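A minimal sketch of the Switch Transformer auxiliary loss and of bias-based expert selection in Python/NumPy; the random gating weights, the bias update rule, and its step size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, k = 64, 8, 2
W = rng.normal(size=(T, n))
W = np.exp(W) / np.exp(W).sum(axis=1, keepdims=True)   # gating weights w(x_t), one row per query

# Switch-style auxiliary loss: n * sum_i f_i * P_i
chosen = W.argmax(axis=1)                              # expert chosen by each query (the k = 1 case)
f = np.bincount(chosen, minlength=n) / T               # fraction of queries that chose expert i
P = W.mean(axis=0)                                     # fraction of gating weight on expert i
aux_loss = n * (f * P).sum()                           # equals 1 when the load is perfectly balanced
print(aux_loss)

# Auxiliary-loss-free balancing: per-expert bias used only for expert selection.
bias = np.zeros(n)
for step in range(100):
    picked = np.argsort(W + bias, axis=1)[:, -k:]      # top-k selection uses w(x)_i + b_i
    load = np.bincount(picked.ravel(), minlength=n) / (T * k)
    bias += 0.01 * np.sign(1.0 / n - load)             # raise the bias of neglected experts, lower others
print(load)                                            # loads drift toward 1/n
# When summing the expert outputs, the weights w(x)_i are used without the bias.
```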