Mimicry
Mimicry is a lightweight PyTorch library aimed towards the reproducibility of GAN research.
Comparing GANs is often difficult: mild differences in implementation and evaluation methodology can produce huge performance differences. Mimicry aims to resolve this by providing: (a) standardized implementations of popular GANs that closely reproduce reported scores; (b) baseline scores of GANs trained and evaluated under the same conditions; and (c) a framework that lets researchers focus on implementing GANs without rewriting most of the GAN training boilerplate, with support for multiple GAN evaluation metrics.
We provide a model zoo and a set of baselines to benchmark different GANs of the same model size trained under the same conditions, using multiple metrics. To ensure reproducibility, we verify the scores of our implemented models against the scores reported in the literature.
Installation
The library can be installed with:
pip install torch-mimicry
See also setup information for more.
Example Usage
Training a popular GAN like SNGAN that reproduces reported scores can be done as simply as:
import torch
import torch.optim as optim
import torch_mimicry as mmc
from torch_mimicry.nets import sngan

# Data handling objects
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
dataset = mmc.datasets.load_dataset(root='./datasets', name='cifar10')
dataloader = torch.utils.data.DataLoader(
    dataset, batch_size=64, shuffle=True, num_workers=4)

# Define models and optimizers
netG = sngan.SNGANGenerator32().to(device)
netD = sngan.SNGANDiscriminator32().to(device)
optD = optim.Adam(netD.parameters(), 2e-4, betas=(0.0, 0.9))
optG = optim.Adam(netG.parameters(), 2e-4, betas=(0.0, 0.9))

# Start training
trainer = mmc.training.Trainer(
    netD=netD,
    netG=netG,
    optD=optD,
    optG=optG,
    n_dis=5,
    num_steps=100000,
    lr_decay='linear',
    dataloader=dataloader,
    log_dir='./log/example',
    device=device)
trainer.train()
Example outputs:
>>> INFO: [Epoch 1/127][Global Step: 10/100000]
| D(G(z)): 0.5941
| D(x): 0.9303
| errD: 1.4052
| errG: -0.6671
| lr_D: 0.0002
| lr_G: 0.0002
| (0.4550 sec/idx)
^CINFO: Saving checkpoints from keyboard interrupt...
INFO: Training Ended
Tensorboard visualizations:
tensorboard --logdir=./log/example
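Trained checkpoints in the same log directory can also be scored with the library's evaluation metrics. The snippet below is a minimal sketch based on the mmc.metrics.evaluate interface shown in the project's documentation; the exact keyword names (in particular the dataset argument) should be treated as assumptions and checked against your installed version.

import torch
import torch_mimicry as mmc
from torch_mimicry.nets import sngan

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Re-create the generator architecture; evaluate() is expected to restore
# the checkpoint saved at `evaluate_step` from log_dir before scoring.
netG = sngan.SNGANGenerator32().to(device)

# Compute FID for the checkpoint at step 100000 of the run trained above.
mmc.metrics.evaluate(
    metric='fid',
    log_dir='./log/example',
    netG=netG,
    dataset='cifar10',
    num_real_samples=50000,
    num_fake_samples=50000,
    evaluate_step=100000,
    device=device)

When this is run inside the training script itself, the netG defined earlier can be reused directly.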
See further details in the example script, as well as the detailed tutorial on implementing a custom GAN from scratch.
Baselines | Model Zoo
For a fair comparison, we train all models under the same training conditions for each dataset, each implemented using ResNet backbones of the same architectural capacity. We train our models with the Adam optimizer using the popular hyperparameters (β1, β2) = (0.0, 0.9). In the tables below, n_dis is the number of discriminator update steps per generator update step, and n_iter is the total number of training iterations.
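As a concrete illustration of how these settings map onto code, the sketch below wires up the CelebA 128 x 128 baseline configuration listed further down (batch size 64, learning rate 2e-4, (β1, β2) = (0.0, 0.9), no decay, n_dis = 2, n_iter = 100K). The dataset name 'celeba_128' and the 128 x 128 SNGAN classes are assumptions made for illustration; substitute whichever model and dataset you are benchmarking.

import torch
import torch.optim as optim
import torch_mimicry as mmc
from torch_mimicry.nets import sngan

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Dataset name and 128 x 128 model classes are assumptions for illustration.
dataset = mmc.datasets.load_dataset(root='./datasets', name='celeba_128')
dataloader = torch.utils.data.DataLoader(
    dataset, batch_size=64, shuffle=True, num_workers=4)

netG = sngan.SNGANGenerator128().to(device)
netD = sngan.SNGANDiscriminator128().to(device)
optD = optim.Adam(netD.parameters(), 2e-4, betas=(0.0, 0.9))
optG = optim.Adam(netG.parameters(), 2e-4, betas=(0.0, 0.9))

# n_dis=2 and 100K steps per the CelebA 128 x 128 row; no lr_decay argument
# is passed, relying on the library default of no decay (an assumption).
trainer = mmc.training.Trainer(
    netD=netD,
    netG=netG,
    optD=optD,
    optG=optG,
    n_dis=2,
    num_steps=100000,
    dataloader=dataloader,
    log_dir='./log/celeba_128',
    device=device)
trainer.train()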
Models
| Abbrev. | Name | Type* |
| --- | --- | --- |
| DCGAN | Deep Convolutional GAN | Unconditional |
| WGAN-GP | Wasserstein GAN with Gradient Penalty | Unconditional |
| SNGAN | Spectral Normalization GAN | Unconditional |
| cGAN-PD | Conditional GAN with Projection Discriminator | Conditional |
| SSGAN | Self-supervised GAN | Unconditional |
| InfoMax-GAN | Infomax-GAN | Unconditional |
*Conditional GAN scores are only reported for labelled datasets.
Metrics
*Inception Score can be a poor indicator of GAN performance, as it does not measure diversity and is not domain agnostic; this is why datasets containing only a single class (e.g. CelebA and LSUN-Bedroom) score poorly on this metric.
Datasets
| Dataset | Split | Resolution |
| --- | --- | --- |
| CIFAR-10 | Train | 32 x 32 |
| CIFAR-100 | Train | 32 x 32 |
| ImageNet | Train | 32 x 32 |
| STL-10 | Unlabeled | 48 x 48 |
| CelebA | All | 64 x 64 |
| CelebA | All | 128 x 128 |
| LSUN-Bedroom | Train | 128 x 128 |
CelebA
Paper | Dataset
Training Parameters
| Resolution | Batch Size | Learning Rate | β1 | β2 | Decay Policy | n_dis | n_iter |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 128 x 128 | 64 | 2e-4 | 0.0 | 0.9 | None | 2 | 100K |
| 64 x 64 | 64 | 2e-4 | 0.0 | 0.9 | Linear | 5 | 100K |
Results
LSUN-Bedroom
Paper | Dataset
Training Parameters
| Resolution | Batch Size | Learning Rate | β1 | β2 | Decay Policy | n_dis | n_iter |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 128 x 128 | 64 | 2e-4 | 0.0 | 0.9 | Linear | 2 | 100K |
Results
STL-10
Paper | Dataset
Training Parameters
| Resolution | Batch Size | Learning Rate | β1 | β2 | Decay Policy | n_dis | n_iter |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 48 x 48 | 64 | 2e-4 | 0.0 | 0.9 | Linear | 5 | 100K |
Results
ImageNet
Paper | Dataset
Training Parameters
| Resolution | Batch Size | Learning Rate | β1 | β2 | Decay Policy | n_dis | n_iter |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 32 x 32 | 64 | 2e-4 | 0.0 | 0.9 | Linear | 5 | 100K |
Results
CIFAR-10
Paper | Dataset
Training Parameters
| Resolution | Batch Size | Learning Rate | β1 | β2 | Decay Policy | n_dis | n_iter |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 32 x 32 | 64 | 2e-4 | 0.0 | 0.9 | Linear | 5 | 100K |
Results
CIFAR-100
Paper | Dataset
Training Parameters
| Resolution | Batch Size | Learning Rate | β1 | β2 | Decay Policy | n_dis | n_iter |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 32 x 32 | 64 | 2e-4 | 0.0 | 0.9 | Linear | 5 | 100K |
Results
Reproducibility
To verify our implementations, we reproduce scores reported in the literature by re-implementing the models with the same architectures, training them under the same conditions, and evaluating them on CIFAR-10 using the exact same methodology for computing FID.
As FID produces highly biased estimates (using larger sample sizes leads to a lower score), we reproduce the reported scores using the same sample sizes, where n_real and n_fake refer to the number of real and fake images used for computing FID, respectively.
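For example, to reproduce a score reported under a particular protocol, the same evaluate call from the Example Usage section can be rerun with n_real and n_fake matched to that protocol. The 10,000/10,000 split below is a placeholder chosen purely for illustration, not a value taken from any specific baseline, and the keyword names are the same assumptions noted earlier.

import torch
import torch_mimicry as mmc
from torch_mimicry.nets import sngan

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
netG = sngan.SNGANGenerator32().to(device)

# Match num_real_samples and num_fake_samples to the compared paper's
# protocol; the values below are placeholders for illustration only.
mmc.metrics.evaluate(
    metric='fid',
    log_dir='./log/example',
    netG=netG,
    dataset='cifar10',
    num_real_samples=10000,
    num_fake_samples=10000,
    evaluate_step=100000,
    device=device)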