Mode collapse
In machine learning, mode collapse is a failure mode observed in generative models, originally noted in generative adversarial networks (GANs). It occurs when the model produces outputs that are less diverse than expected, effectively "collapsing" to generate only a few modes of the data distribution while ignoring others. This phenomenon undermines the goal of generative models to capture the full diversity of the training data.
A model can typically collapse at one of two stages: during training, or during post-training finetuning.
Mode collapse reduces the utility of generative models in applications such as:
- image synthesis;
- data augmentation;
- scientific simulations.
Distinctions
In terms of learning a probability distribution, mode collapse corresponds to the collapse of the entire distribution to one or a few points, which may or may not correspond to points with high likelihood in the target distribution.
Overfitting, on the other hand, corresponds to learning a distribution that is highly peaked around the training data points. In a sense, it can be seen as a form of near-complete or complete mode collapse in which the modes are most or all of the training data points. However, overfitting usually stems from overparametrization of the model rather than from the training procedure itself, as is the case for GANs.
Underfitting, by contrast, shares little with mode collapse. In this case, the model is insufficiently parametrized or trained, and the learned distribution remains far from the target distribution, typically staying close to the distribution at initialization.
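The distinction can be made concrete with a small numerical illustration. The following sketch (our own example, not drawn from any particular reference) defines a two-mode one-dimensional target distribution and compares collapsed, overfit, and underfit approximations; the distributions and the two heuristic statistics are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def target_samples(n):
    """Target: equal-weight mixture of N(-4, 1) and N(+4, 1)."""
    modes = rng.choice([-4.0, 4.0], size=n)
    return modes + rng.normal(size=n)

train = target_samples(1000)

# Mode collapse: nearly all mass on a single mode (+4); the other mode is ignored.
collapsed = 4.0 + 0.1 * rng.normal(size=1000)

# Overfitting: sharply peaked around individual training points
# (here simulated by resampling the training set with tiny noise).
overfit = rng.choice(train, size=1000) + 0.01 * rng.normal(size=1000)

# Underfitting: far from the target, close to a broad initial distribution.
underfit = rng.normal(scale=8.0, size=1000)

def mode_coverage(samples, centers=(-4.0, 4.0), radius=2.0):
    """Fraction of target modes receiving at least 5% of the samples."""
    hits = [np.mean(np.abs(samples - c) < radius) > 0.05 for c in centers]
    return sum(hits) / len(centers)

def near_any_mode(samples, centers=(-4.0, 4.0), radius=2.0):
    """Fraction of samples landing within `radius` of some target mode."""
    return np.mean(np.min([np.abs(samples - c) for c in centers], axis=0) < radius)

for name, s in [("collapsed", collapsed), ("overfit", overfit), ("underfit", underfit)]:
    print(name, mode_coverage(s), near_any_mode(s))
# collapsed: samples sit near a mode, but only 1 of 2 modes is covered;
# overfit: both modes covered, but only by memorizing training points;
# underfit: nominally touches both modes, yet most samples fall far from either.
```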
In GANs
Training-time mode collapse was originally noted and studied in GANs, where it arises primarily from imbalances in the training dynamics between the generator and the discriminator. In the original GAN paper, it was also called the "Helvetica scenario". Common causes include:
- If the discriminator learns too slowly, the generator can exploit its weaknesses by producing a narrow set of outputs that consistently fool it.
- Traditional GAN loss functions do not explicitly penalize the generator for producing similar-looking outputs, leaving little pressure to cover every mode.
- The adversarial training process can lead to oscillatory behavior, in which the generator and discriminator fail to converge to a stable equilibrium and instead engage in a rock-paper-scissors-like cycle: the generator produces only "rock" until the discriminator learns to classify that as generated, then the generator switches to producing only "scissors", and so on. The generator is always mode-collapsed, though the precise mode to which it collapses changes during training (these dynamics are sketched below).
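These dynamics are easiest to see in a toy setting. Below is a minimal PyTorch sketch, assuming standard non-saturating GAN updates on a 2-D eight-mode Gaussian mixture (a common testbed for mode collapse); the architectures, learning rates, and step count are illustrative assumptions rather than values from any particular paper.

```python
import torch
import torch.nn as nn

def mixture_batch(n=128):
    """Real data: 2-D points drawn from 8 Gaussian modes arranged on a circle."""
    angles = torch.randint(0, 8, (n,)) * (2 * torch.pi / 8)
    centers = torch.stack([torch.cos(angles), torch.sin(angles)], dim=1)
    return 2.0 * centers + 0.05 * torch.randn(n, 2)

G = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))   # generator
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
# Lowering this lr (slowing the discriminator) makes it easier for G to fool D
# with a narrow set of outputs, as described in the first cause above.
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(5000):
    real = mixture_batch()
    fake = G(torch.randn(128, 8))

    # Discriminator update: classify real vs. generated samples.
    d_loss = (bce(D(real), torch.ones(128, 1))
              + bce(D(fake.detach()), torch.zeros(128, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: the loss only rewards fooling D on the current batch;
    # nothing in it rewards covering all 8 modes (the "leniency" noted above).
    g_loss = bce(D(G(torch.randn(128, 8))), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# A quick collapse check: count how many of the 8 modes the generator hits.
with torch.no_grad():
    samples = G(torch.randn(2000, 8))
    angles = torch.arange(8) * (2 * torch.pi / 8)
    centers = 2.0 * torch.stack([torch.cos(angles), torch.sin(angles)], dim=1)
    dists = torch.cdist(samples, centers)      # (2000, 8) sample-to-mode distances
    covered = (dists.min(dim=0).values < 0.5).sum().item()
print(f"generator covers {covered} of 8 modes")
```

Rerunning the collapse check at intervals during training makes the cycling visible: the set of covered modes often shifts over time even when its size stays small.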
Mitigation strategies include:
- Two time-scale update rule (TTUR), which trains the generator and discriminator with different learning rates to keep their progress balanced.
- Mini-batch discrimination allows the discriminator to evaluate entire batches of samples, encouraging diversity.
- Unrolled GANs optimize the generator against future states of the discriminator.
- Wasserstein GAN uses Earth Mover's distance to provide more stable gradients.
- Training on a large and balanced dataset.
- Regularization methods such as gradient penalty and spectral normalization (the gradient penalty is sketched below).
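Of the regularizers just listed, the gradient penalty is compact enough to show directly. The sketch below follows the WGAN-GP formulation, penalizing the critic's gradient norm toward 1 on points interpolated between real and generated samples; the penalty weight of 10 is the common default, and `critic` is assumed to map a batch of samples to scalar scores.

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    """WGAN-GP penalty: push the critic's gradient norm toward 1 on random
    interpolations of real and fake samples (a soft Lipschitz constraint)."""
    eps = torch.rand(real.size(0), 1)                  # per-sample mixing weight
    interp = (eps * real + (1 - eps) * fake.detach()).requires_grad_(True)
    scores = critic(interp)
    grads, = torch.autograd.grad(scores.sum(), interp, create_graph=True)
    return lam * ((grads.norm(2, dim=1) - 1) ** 2).mean()

# Critic loss per step (Wasserstein estimate plus the penalty); the smoother,
# more stable critic gradients reduce the incentive to collapse onto a few modes:
#   d_loss = critic(fake).mean() - critic(real).mean() \
#            + gradient_penalty(critic, real, fake)
```

Note that the `eps` broadcasting assumes flat (batch, features) samples; image tensors would need one mixing weight per sample broadcast across all trailing dimensions.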
Finetuning
Mode collapse may occur during finetuning, as the model learns to generate text that accomplishes the specific task but loses the ability to generate other forms of text. It may also collapse to generating only a small subset of the texts that accomplish the task. It is hypothesized that there is a tradeoff between quality and diversity: given a single pretrained model, one may finetune it to perform a specific task, and more finetuning results in higher average task performance but less diverse outputs, while less finetuning results in lower average performance but more diverse outputs. A similar tradeoff has been observed in image generation models and GAN-based text generators.
Similarly, mode collapse may occur during RLHF, for example through reward hacking of the reward model, among other mechanisms.
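One way to quantify the diversity side of the tradeoff described above is a simple n-gram statistic such as distinct-n: the fraction of n-grams across a set of generated samples that are unique. The sketch below is our own minimal illustration, with hypothetical example strings.

```python
def distinct_n(texts, n=2):
    """distinct-n over a list of generated strings; a drop in this value after
    finetuning or RLHF suggests collapse onto fewer output patterns."""
    ngrams = []
    for t in texts:
        tokens = t.split()
        ngrams += [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

before = ["the cat sat on the mat", "a storm rolled over the hills"]
after = ["the answer is yes", "the answer is yes"]   # collapsed outputs
print(distinct_n(before), distinct_n(after))         # 1.0 vs. 0.5
```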