# Wasserstein GAN with Gradient Penalty (WGAN-GP)

### Goals

In this notebook, you’re going to build a Wasserstein GAN with Gradient Penalty (WGAN-GP) that solves some of the stability issues with the GANs that you have been using up until this point. Specifically, you’ll use a special kind of loss function known as the W-loss, where W stands for Wasserstein, and gradient penalties to prevent mode collapse.

Fun Fact: Wasserstein is named after a mathematician at Penn State, Leonid Vaseršteĭn. You’ll see it abbreviated to W (e.g. WGAN, W-loss, W-distance).

### Learning Objectives

1. Get hands-on experience building a more stable GAN: Wasserstein GAN with Gradient Penalty (WGAN-GP).
2. Train the more advanced WGAN-GP model.

## Generator and Critic

You will begin by importing some useful packages, defining visualization functions, building the generator, and building the critic. Since the changes for WGAN-GP are done to the loss function during training, you can simply reuse your previous GAN code for the generator and critic class. Remember that in WGAN-GP, you no longer use a discriminator that classifies fake and real as 0 and 1 but rather a critic that scores images with real numbers.

## Training Initializations

Now you can start putting it all together.
As usual, you will start by setting the parameters:

- n_epochs: the number of times you iterate through the entire dataset when training
- z_dim: the dimension of the noise vector
- display_step: how often to display/visualize the images
- batch_size: the number of images per forward/backward pass
- lr: the learning rate
- beta_1, beta_2: the momentum terms
- c_lambda: weight of the gradient penalty
- crit_repeats: number of times to update the critic per generator update - there are more details about this in the Putting It All Together section
- device: the device type

You will also load and transform the MNIST dataset to tensors.

Then, you can initialize your generator, critic, and optimizers.
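As a rough sketch, the setup might look like the following. The hyperparameter values and the placeholder networks here are illustrative assumptions, not the notebook's actual architectures, and the MNIST loading step is omitted:

```python
import torch
from torch import nn

# Hypothetical hyperparameter values matching the list above; the
# actual notebook may use different numbers.
n_epochs = 100
z_dim = 64
display_step = 50
batch_size = 128
lr = 0.0002
beta_1, beta_2 = 0.5, 0.999  # Adam momentum terms
c_lambda = 10                # gradient penalty weight
crit_repeats = 5             # critic updates per generator update
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Tiny stand-ins for the Generator and Critic classes you reuse from
# the previous lab (the real ones are convolutional).
gen = nn.Sequential(nn.Linear(z_dim, 784), nn.Tanh()).to(device)
crit = nn.Sequential(nn.Linear(784, 1)).to(device)

# Both optimizers use the same learning rate and betas.
gen_opt = torch.optim.Adam(gen.parameters(), lr=lr, betas=(beta_1, beta_2))
crit_opt = torch.optim.Adam(crit.parameters(), lr=lr, betas=(beta_1, beta_2))
```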

Calculating the gradient penalty can be broken into two functions: (1) compute the gradient with respect to the images and (2) compute the gradient penalty given the gradient.

You can start by getting the gradient. The gradient is computed by first creating a mixed image: weight the real and fake images by epsilon and 1 - epsilon, respectively, and add them together. Once you have this intermediate image, you can get the critic’s output on it. Finally, you compute the gradient of the critic’s scores on the mixed images (output) with respect to the pixels of the mixed images (input). You will need to fill in the code to get the gradient wherever you see None. There is a test function in the next block for you to test your solution.
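The steps above can be sketched as follows. This assumes the critic accepts a batch of image tensors and that epsilon is shaped to broadcast over the batch; treat it as one possible solution shape, not the official one:

```python
import torch
from torch import nn

def get_gradient(crit, real, fake, epsilon):
    # Mix real and fake images: an epsilon-weighted interpolation.
    mixed_images = real * epsilon + fake * (1 - epsilon)
    # Critic's scores on the mixed images.
    mixed_scores = crit(mixed_images)
    # Gradient of the scores with respect to the mixed image pixels.
    gradient = torch.autograd.grad(
        inputs=mixed_images,
        outputs=mixed_scores,
        grad_outputs=torch.ones_like(mixed_scores),
        create_graph=True,
        retain_graph=True,
    )[0]
    return gradient
```

Note that create_graph=True is what lets you later backpropagate through this gradient when the penalty is added to the critic's loss.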



The second function you need to complete is to compute the gradient penalty given the gradient. First, you calculate the magnitude of each image’s gradient. The magnitude of a gradient is also called the norm. Then, you calculate the penalty by squaring the distance between each magnitude and the ideal norm of 1 and taking the mean of all the squared distances.

Again, you will need to fill in the code wherever you see None. There are hints below that you can view if you need help and there is a test function in the next block for you to test your solution.

Optional hints for gradient_penalty:
1. Make sure you take the mean at the end.
2. Note that the magnitude of each gradient has already been calculated for you.
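A minimal sketch of this penalty, assuming the gradient tensor has the batch dimension first (the flattening and norm step may already be provided in the assignment):

```python
import torch

def gradient_penalty(gradient):
    # Flatten each image's gradient to a vector, one row per image.
    gradient = gradient.view(len(gradient), -1)
    # Magnitude (L2 norm) of each image's gradient.
    gradient_norm = gradient.norm(2, dim=1)
    # Mean squared distance between each magnitude and the ideal norm of 1.
    penalty = torch.mean((gradient_norm - 1) ** 2)
    return penalty
```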


## Losses

Next, you need to calculate the loss for the generator and the critic.

For the generator, the loss is calculated by maximizing the critic’s prediction on the generator’s fake images. The function’s argument contains the critic’s scores for all fake images in the batch, but you will use their mean.

There are optional hints below and a test function in the next block for you to test your solution.

Optional hints for get_gen_loss:
1. This can be written in one line.
2. This is the negative of the mean of the critic's scores.
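Following those hints, one plausible one-liner (the argument name is an assumption):

```python
import torch

def get_gen_loss(crit_fake_pred):
    # Minimizing the negative mean score is equivalent to maximizing
    # the critic's mean prediction on the generator's fake images.
    return -crit_fake_pred.mean()
```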


For the critic, the loss is calculated by maximizing the distance between the critic’s predictions on the real images and its predictions on the fake images, while also adding a gradient penalty weighted by lambda. The arguments contain the scores for all the images in the batch, and you will use their means.

There are hints below if you get stuck and a test function in the next block for you to test your solution.

Optional hints for get_crit_loss:
1. The higher the mean fake score, the higher the critic's loss is.
2. What does this suggest about the mean real score?
3. The higher the gradient penalty, the higher the critic's loss is, proportional to lambda.
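Putting those hints together, the loss might look like this sketch (argument names are assumptions; gp is the gradient penalty value and c_lambda its weight):

```python
import torch

def get_crit_loss(crit_fake_pred, crit_real_pred, gp, c_lambda):
    # Fake scores push the loss up, real scores push it down, and the
    # gradient penalty is added with weight c_lambda.
    return crit_fake_pred.mean() - crit_real_pred.mean() + c_lambda * gp
```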


## Putting It All Together

Before you put everything together, there are a few things to note.

1. Even on GPU, the training will run more slowly than previous labs because the gradient penalty requires you to compute the gradient of a gradient — this means potentially a few minutes per epoch! For best results, run this for as long as you can while on GPU.
2. One important difference from earlier versions is that you will update the critic multiple times every time you update the generator. This helps prevent the generator from overpowering the critic. Sometimes, you might see the reverse, with the generator updated more times than the critic. This depends on architectural choices (e.g. the depth and width of the network) and algorithmic choices (e.g. which loss you’re using).
3. WGAN-GP isn’t necessarily meant to improve the overall performance of a GAN, but rather to increase stability and avoid mode collapse. In general, a WGAN will train in a much more stable way than the vanilla DCGAN from the last assignment, though it will generally run a bit slower. You should also be able to train your model for more epochs without it collapsing.
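The critic-repeats schedule described above can be sketched as the loop below. It uses tiny stand-in networks and random tensors in place of MNIST batches, so every name and size here is illustrative, not the notebook's actual code:

```python
import torch
from torch import nn

torch.manual_seed(0)
z_dim, c_lambda, crit_repeats = 16, 10, 5

# Tiny stand-ins for the real Generator and Critic.
gen = nn.Linear(z_dim, 64)
crit = nn.Linear(64, 1)
gen_opt = torch.optim.Adam(gen.parameters(), lr=2e-4, betas=(0.5, 0.999))
crit_opt = torch.optim.Adam(crit.parameters(), lr=2e-4, betas=(0.5, 0.999))

def get_gradient(crit, real, fake, epsilon):
    # Gradient of the critic's score w.r.t. an epsilon-mixed image.
    mixed = real * epsilon + fake * (1 - epsilon)
    scores = crit(mixed)
    return torch.autograd.grad(
        inputs=mixed, outputs=scores,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True)[0]

def gradient_penalty(gradient):
    norm = gradient.view(len(gradient), -1).norm(2, dim=1)
    return ((norm - 1) ** 2).mean()

batches = [torch.randn(32, 64) for _ in range(3)]  # stands in for the DataLoader
for real in batches:
    # Update the critic crit_repeats times per generator update.
    for _ in range(crit_repeats):
        crit_opt.zero_grad()
        fake = gen(torch.randn(len(real), z_dim)).detach()
        epsilon = torch.rand(len(real), 1, requires_grad=True)
        gp = gradient_penalty(get_gradient(crit, real, fake, epsilon))
        crit_loss = crit(fake).mean() - crit(real).mean() + c_lambda * gp
        crit_loss.backward()
        crit_opt.step()
    # Then one generator update.
    gen_opt.zero_grad()
    fake = gen(torch.randn(len(real), z_dim))
    gen_loss = -crit(fake).mean()
    gen_loss.backward()
    gen_opt.step()
```

Detaching the fake images during the critic updates keeps the generator's parameters from receiving gradients on those steps; the generator only learns on its own update.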

Here is a snapshot of what your WGAN-GP outputs should resemble:
