### Generating human faces with Adversarial Networks

This time we’ll train a neural net to generate plausible human faces in all their subtlety: appearance, expression, accessories, etc. ‘Cuz when us machines gonna take over Earth, there won’t be any more faces left. We want to preserve this data for future iterations. Yikes…




# Generative adversarial nets 101

Deep learning is simple, isn’t it?

• build some network that generates the face (small image)
• make up a measure of how good that face is
• optimize with gradient descent :)

The only problem is: how can we engineers tell well-generated faces from bad ones? And I bet we won’t be asking a designer for help.

If we can’t tell good faces from bad, we delegate it to yet another neural network!

That makes the two of them:

• Generator - takes random noise for inspiration and tries to generate a face sample.
• Let’s call him G(z), where z is Gaussian noise.
• Discriminator - takes a face sample and tries to tell whether it’s real or fake.
• Predicts the probability of the input image being a real face
• Let’s call him D(x), x being an image.
• D(x) is the prediction for a real image and D(G(z)) is the prediction for a face made by the generator.
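This tug-of-war is usually written as the minimax objective from the original GAN paper: the discriminator maximizes it, the generator minimizes it.

```latex
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim \mathcal{N}(0, I)}\big[\log\big(1 - D(G(z))\big)\big]
```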

Before we dive into training them, let’s construct the two networks.

Using TensorFlow backend.

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
noise (InputLayer)           (None, 256)               0
_________________________________________________________________
dense_1 (Dense)              (None, 640)               164480
_________________________________________________________________
reshape_1 (Reshape)          (None, 8, 8, 10)          0
_________________________________________________________________
conv2d_transpose_1 (Conv2DTr (None, 12, 12, 64)        16064
_________________________________________________________________
conv2d_transpose_2 (Conv2DTr (None, 16, 16, 64)        102464
_________________________________________________________________
up_sampling2d_1 (UpSampling2 (None, 32, 32, 64)        0
_________________________________________________________________
conv2d_transpose_3 (Conv2DTr (None, 34, 34, 32)        18464
_________________________________________________________________
conv2d_transpose_4 (Conv2DTr (None, 36, 36, 32)        9248
_________________________________________________________________
conv2d_transpose_5 (Conv2DTr (None, 38, 38, 32)        9248
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 36, 36, 3)         867
=================================================================
Total params: 320,835
Trainable params: 320,835
Non-trainable params: 0
_________________________________________________________________
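The summary above can be reproduced with a stack like the following. This is a sketch using the `tf.keras` API; the activations are assumptions, since the summary only fixes layer types, output shapes, and parameter counts.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Dense, Reshape, Conv2DTranspose,
                                     UpSampling2D, Conv2D)

CODE_SIZE = 256  # dimensionality of the noise vector z

generator = Sequential([
    # 256-dim noise -> 640 units -> 8x8 feature map with 10 channels
    Dense(8 * 8 * 10, activation='elu', input_shape=[CODE_SIZE]),
    Reshape((8, 8, 10)),
    # 'valid' transposed convolutions grow the map: 8 -> 12 -> 16
    Conv2DTranspose(64, kernel_size=(5, 5), activation='elu'),
    Conv2DTranspose(64, kernel_size=(5, 5), activation='elu'),
    UpSampling2D(size=(2, 2)),                             # 16 -> 32
    Conv2DTranspose(32, kernel_size=3, activation='elu'),  # 32 -> 34
    Conv2DTranspose(32, kernel_size=3, activation='elu'),  # 34 -> 36
    Conv2DTranspose(32, kernel_size=3, activation='elu'),  # 36 -> 38
    Conv2D(3, kernel_size=3),                              # 38 -> 36, 3 colour channels
])
```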


### Discriminator

• Discriminator is your usual convolutional network with interleaved convolution and pooling layers
• The network does not include dropout/batchnorm to avoid learning complications.
• We also regularize the pre-output layer to prevent discriminator from being too certain.
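The text leaves the discriminator’s exact layout unspecified, so the layer sizes below are illustrative assumptions; only the overall shape — an interleaved conv/pool stack, no dropout/batchnorm, an L2-regularized pre-output layer, and a single real-vs-fake probability — follows the description above.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.regularizers import l2

discriminator = Sequential([
    Conv2D(32, kernel_size=3, activation='elu', input_shape=(36, 36, 3)),
    MaxPooling2D(),
    Conv2D(64, kernel_size=3, activation='elu'),
    MaxPooling2D(),
    Flatten(),
    # L2 penalty on the pre-output layer keeps D from becoming too certain
    Dense(256, activation='tanh', kernel_regularizer=l2(1e-3)),
    Dense(1, activation='sigmoid'),  # D(x): probability the input is a real face
])
```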

# Training

We train the two networks concurrently:

• Train the discriminator to better distinguish real data from the current generator’s samples
• Train the generator to make the discriminator mistake generated samples for real ones
• Since both networks are differentiable, we can train each with gradient descent.

Training is done iteratively until the discriminator is no longer able to tell the difference (or until you run out of patience).

### Tricks:

• Regularize discriminator output weights to prevent explosion
• Train the generator with Adam to speed up training. The discriminator trains with plain SGD to avoid problems with momentum.
• More: https://github.com/soumith/ganhacks
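Put together, one adversarial update can be sketched like this. It is a minimal sketch with tiny stand-in models so it runs on its own; in practice you would substitute the real generator and discriminator. The optimizer split (SGD for the discriminator, Adam for the generator) follows the tricks list above.

```python
import tensorflow as tf

CODE_SIZE = 256

# Tiny stand-in networks; replace with the real G and D defined earlier.
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(36 * 36 * 3, input_shape=[CODE_SIZE]),
    tf.keras.layers.Reshape((36, 36, 3)),
])
discriminator = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(36, 36, 3)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

disc_opt = tf.keras.optimizers.SGD(0.01)   # plain SGD for D: no momentum trouble
gen_opt = tf.keras.optimizers.Adam(1e-4)   # Adam for G: faster training
bce = tf.keras.losses.BinaryCrossentropy()

def train_step(real_images, batch_size):
    noise = tf.random.normal([batch_size, CODE_SIZE])
    # 1) Discriminator step: push D(x) -> 1 on real images, D(G(z)) -> 0 on fakes.
    with tf.GradientTape() as tape:
        fake_images = generator(noise)
        d_loss = (bce(tf.ones([batch_size, 1]), discriminator(real_images))
                  + bce(tf.zeros([batch_size, 1]), discriminator(fake_images)))
    grads = tape.gradient(d_loss, discriminator.trainable_variables)
    disc_opt.apply_gradients(zip(grads, discriminator.trainable_variables))
    # 2) Generator step: push D(G(z)) -> 1, i.e. fool the discriminator.
    with tf.GradientTape() as tape:
        g_loss = bce(tf.ones([batch_size, 1]), discriminator(generator(noise)))
    grads = tape.gradient(g_loss, generator.trainable_variables)
    gen_opt.apply_gradients(zip(grads, generator.trainable_variables))
    return float(d_loss), float(g_loss)
```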

### Auxiliary functions

Here we define a few helper functions that draw current data distributions and sample training batches.
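The batch samplers might look like the sketch below; the function names and the `data` array are hypothetical, and the plotting helper is omitted.

```python
import numpy as np

def sample_data_batch(data, batch_size):
    """Pick a random batch of real images from an array shaped [N, H, W, 3]."""
    idx = np.random.choice(len(data), size=batch_size)
    return data[idx]

def sample_noise_batch(batch_size, code_size=256):
    """Draw a batch of Gaussian codes z for the generator."""
    return np.random.normal(size=[batch_size, code_size]).astype('float32')
```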

### Training

Main loop.
We just train generator and discriminator in a loop and plot results once every N iterations.
