Progressive growing of GANs for improved quality, stability and variation

  • Category: Article
  • Created: February 14, 2022 10:01 AM
  • Status: Open
  • URL: https://arxiv.org/pdf/1710.10196.pdf
  • Updated: February 15, 2022 5:07 PM

Highlights

We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses.

Intuition

  1. The generation of high-resolution images is difficult because higher resolution makes it easier to tell the generated images apart from training images, thus drastically amplifying the gradient problem.
  2. Our key insight is that we can grow both the generator and discriminator progressively, starting from easier low-resolution images, and add new layers that introduce higher-resolution details as the training progresses. This greatly speeds up training and improves stability in high resolutions.

Methods

Our primary contribution is a training methodology for GANs where we start with low-resolution images, and then progressively increase the resolution by adding layers to the networks. This incremental nature allows the training to first discover large-scale structure of the image distribution and then shift attention to increasingly finer scale detail, instead of having to learn all scales simultaneously.

[Figure: the generator and discriminator grow progressively, starting from a low resolution and adding layers until the final output resolution is reached.]
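As a minimal sketch of this growth schedule (the start/final resolutions and the per-phase image count below are illustrative assumptions, not the paper's exact training configuration):

```python
# Sketch of the progressive training schedule: resolution doubles each phase.
# START_RES, FINAL_RES and IMAGES_PER_PHASE are assumptions for illustration.
START_RES = 4
FINAL_RES = 1024
IMAGES_PER_PHASE = 800_000

def resolution_schedule(start=START_RES, final=FINAL_RES):
    """Yield the sequence of training resolutions, doubling until the final one."""
    res = start
    while res <= final:
        yield res
        res *= 2

for res in resolution_schedule():
    # Each new resolution gets a fade-in phase (blend in the new layers)
    # followed by a stabilization phase with the new layers fully active.
    print(f"{res}x{res}: fade in new layers, then stabilize, "
          f"{IMAGES_PER_PHASE} images each")
```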

We use generator and discriminator networks that are mirror images of each other and always grow in synchrony. All existing layers in both networks remain trainable throughout the training process. When new layers are added to the networks, we fade them in smoothly. This avoids sudden shocks to the already well-trained, smaller-resolution layers.

[Figure: when the resolution is doubled, the new layers are faded in smoothly by interpolating between the upsampled old output and the output of the new block.]
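A minimal PyTorch sketch of the fade-in on the generator side, assuming hypothetical helpers to_rgb_old and to_rgb_new (1×1 convolutions mapping features to RGB); alpha is ramped linearly from 0 to 1 while the new block is being faded in:

```python
import torch.nn.functional as F

def faded_generator_output(x_low, new_block, to_rgb_old, to_rgb_new, alpha):
    """Blend a newly added generator block in smoothly.

    x_low      : feature maps from the last well-trained (lower-resolution) block
    new_block  : newly added conv block operating at twice the resolution
    to_rgb_old : hypothetical 1x1 conv mapping old features to RGB
    to_rgb_new : hypothetical 1x1 conv mapping new features to RGB
    alpha      : fade-in weight, ramped from 0 to 1 during the transition phase
    """
    up = F.interpolate(x_low, scale_factor=2, mode="nearest")
    # Skip path: reuse the old RGB output, simply upsampled to the new resolution.
    old_rgb = F.interpolate(to_rgb_old(x_low), scale_factor=2, mode="nearest")
    # New path: run the freshly added higher-resolution block.
    new_rgb = to_rgb_new(new_block(up))
    # Smooth interpolation avoids shocking the already well-trained layers.
    return (1.0 - alpha) * old_rgb + alpha * new_rgb
```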

We observe that the progressive training has several benefits.

  1. The generation of smaller images is substantially more stable because there is less class information and fewer modes.
  2. By increasing the resolution little by little, we are continuously asking a much simpler question compared to the end goal of discovering a mapping from latent vectors to high-resolution images.
  3. Another benefit is reduced training time. With progressively growing GANs, most of the iterations are done at lower resolutions, and comparable result quality is often obtained 2–6 times faster, depending on the final output resolution.

Increasing Variation Using Minibatch Standard Deviation

We first compute the standard deviation for each feature in each spatial location over the minibatch. We then average these estimates over all features and spatial locations to arrive at a single value. We replicate the value and concatenate it to all spatial locations and over the minibatch, yielding one additional (constant) feature map. This gives the discriminator access to the variation statistics of the whole minibatch, encouraging generated minibatches to show variation similar to the training data. This layer could be inserted anywhere in the discriminator, but we have found it best to insert it towards the end.
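A minimal PyTorch sketch of this layer, following the single-statistic variant described above:

```python
import torch

def minibatch_stddev(x, eps=1e-8):
    """Append a constant minibatch-standard-deviation feature map.

    x: discriminator activations of shape (N, C, H, W).
    Returns a tensor of shape (N, C + 1, H, W).
    """
    n, c, h, w = x.shape
    # Std of each feature at each spatial location, taken over the minibatch.
    std = torch.sqrt(x.var(dim=0, unbiased=False) + eps)   # (C, H, W)
    # Average over all features and spatial locations -> single scalar.
    mean_std = std.mean()
    # Replicate into one constant feature map and concatenate to every sample.
    feat = mean_std.expand(n, 1, h, w)
    return torch.cat([x, feat], dim=1)
```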