• Category: Article
• Created: January 25, 2022 10:27 AM
• Status: Open
• URL: https://arxiv.org/pdf/1411.1784.pdf
• Updated: February 15, 2022 6:53 PM

# Background

In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, $$y$$, we wish to condition on to both the generator and discriminator.

# Highlights

1. By conditioning the model on additional information it is possible to direct the data generation process.
2. Many interesting problems are more naturally thought of as a probabilistic one-to-many mapping. One way to address this is to use a conditional probabilistic generative model: the input is taken to be the conditioning variable, and the one-to-many mapping is instantiated as a conditional predictive distribution.

# Methods

Generative adversarial nets can be extended to a conditional model if both the generator and discriminator are conditioned on some extra information $$y$$. $$y$$ could be any kind of auxiliary information, such as class labels or data from other modalities. We can perform the conditioning by feeding $$y$$ into both the discriminator and the generator as an additional input layer.
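
A minimal sketch of this conditioning step, assuming PyTorch and one-hot encoded class labels as $$y$$ (as in the paper's MNIST experiment); all sizes here are illustrative, not taken from the paper:

```python
import torch
import torch.nn.functional as F

# Build the conditioning input: one-hot encode the labels y and
# concatenate them with the noise z (for G) or the data x (for D).
batch_size, num_classes, noise_dim = 64, 10, 100  # illustrative sizes

z = torch.randn(batch_size, noise_dim)            # prior noise z ~ p_z(z)
y = torch.randint(0, num_classes, (batch_size,))  # labels to condition on
y_onehot = F.one_hot(y, num_classes).float()      # y as a dense input vector

g_input = torch.cat([z, y_onehot], dim=1)         # generator input [z, y]
```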

### Generator

In the generator, the prior input noise $$p_z(z)$$ and $$y$$ are combined in a joint hidden representation; the adversarial training framework allows considerable flexibility in how this hidden representation is composed.
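
A sketch of such a generator, with separate hidden projections for $$z$$ and $$y$$ merged into a joint hidden representation; the two-branch structure follows the paper's description, but the layer sizes, ReLU activations, and module names are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    # z and y each pass through their own hidden projection; the two are
    # then merged into a joint hidden representation that emits the sample.
    def __init__(self, noise_dim=100, num_classes=10, out_dim=784):
        super().__init__()
        self.z_branch = nn.Sequential(nn.Linear(noise_dim, 200), nn.ReLU())
        self.y_branch = nn.Sequential(nn.Linear(num_classes, 200), nn.ReLU())
        self.joint = nn.Sequential(
            nn.Linear(400, 512), nn.ReLU(),         # joint representation of (z, y)
            nn.Linear(512, out_dim), nn.Sigmoid(),  # e.g. a flattened 28x28 image
        )

    def forward(self, z, y_onehot):
        h = torch.cat([self.z_branch(z), self.y_branch(y_onehot)], dim=1)
        return self.joint(h)
```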

### Discriminator

In the discriminator, $$x$$ and $$y$$ are presented as inputs to a discriminative function (embodied again by an MLP in this case).
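
A corresponding discriminator sketch, concatenating $$x$$ and $$y$$ as input to an MLP that outputs $$D(x \mid y)$$; note the paper's MNIST discriminator uses maxout units, and plain ReLU layers are substituted here for brevity:

```python
import torch
import torch.nn as nn

class ConditionalDiscriminator(nn.Module):
    # x and y are concatenated and scored jointly: the output is D(x | y),
    # the probability that x is a real sample given the condition y.
    def __init__(self, x_dim=784, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + num_classes, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x, y_onehot):
        return self.net(torch.cat([x, y_onehot], dim=1))
```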

$$\min_{G} \max_{D} V(D, G)=\mathbb{E}_{\boldsymbol{x} \sim p_{\text{data}}(\boldsymbol{x})}[\log D(\boldsymbol{x} \mid \boldsymbol{y})]+\mathbb{E}_{\boldsymbol{z} \sim p_{\boldsymbol{z}}(\boldsymbol{z})}[\log (1-D(G(\boldsymbol{z} \mid \boldsymbol{y})))]$$
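
A sketch of how this objective could translate into per-batch losses, using binary cross-entropy in PyTorch; the generator term here uses the common non-saturating variant (maximizing $$\log D(G(z \mid y))$$) rather than literally minimizing $$\log(1 - D(G(z \mid y)))$$:

```python
import torch
import torch.nn.functional as F

def conditional_gan_losses(D, G, x_real, y_onehot, noise_dim=100):
    # D is trained to maximize log D(x|y) + log(1 - D(G(z|y))); G is
    # trained with the non-saturating surrogate, maximizing log D(G(z|y)).
    z = torch.randn(x_real.size(0), noise_dim)
    x_fake = G(z, y_onehot)

    # Discriminator: real pairs (x, y) labeled 1, generated pairs labeled 0.
    d_real = D(x_real, y_onehot)
    d_fake = D(x_fake.detach(), y_onehot)  # detach: no generator gradients here
    d_loss = (F.binary_cross_entropy(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))

    # Generator: fool D into labeling its samples as real.
    g_loss = F.binary_cross_entropy(D(x_fake, y_onehot), torch.ones_like(d_fake))
    return d_loss, g_loss
```

In practice, $$D$$ and $$G$$ take alternating optimizer steps on these two losses.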