
StyleGAN2 Distillation for Feed-forward Image Manipulation

TL;DR: Paired image-to-image translation, trained on synthetic data generated by StyleGAN2, outperforms existing approaches in image manipulation.

Yuri Viazovetskyi*1, Vladimir Ivashkin*1,2, and Evgeny Kashin*1

[1] Yandex, [2] Moscow Institute of Physics and Technology (* indicates equal contribution).

Abstract: StyleGAN2 is a state-of-the-art network for generating realistic images. In addition, it was explicitly trained to have disentangled directions in latent space, which allows efficient image manipulation by varying latent factors. Editing existing images requires embedding a given image into the latent space of StyleGAN2. Latent code optimization via backpropagation is commonly used for high-quality embedding of real-world images, although it is prohibitively slow for many applications. We propose a way to distill a particular image manipulation of StyleGAN2 into an image-to-image network trained in a paired way. The resulting pipeline is an alternative to existing GANs trained on unpaired data. We provide results for human face transformations: gender swap, aging/rejuvenation, style transfer, and image morphing. We show that the quality of generation using our method is comparable to StyleGAN2 backpropagation and current state-of-the-art methods on these particular tasks.

Results

Gender swap

[Figure: examples of gender swap]

Aging

[Figure: examples of aging/rejuvenation]

Style mixing

[Figure: examples of style mixing]
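
Style mixing in StyleGAN-family models is commonly performed by swapping per-layer style codes between two latents at a crossover layer; below is a minimal sketch under the same assumed `generator` interface as above (the layer count and crossover point are assumptions, not the authors' settings).

```python
# Minimal style-mixing sketch; w_a and w_b are per-layer latents of shape
# (1, num_layers, 512), and `generator` is the assumed interface from above.
import torch

@torch.no_grad()
def style_mix(generator, w_a, w_b, crossover=8):
    """Combine coarse styles (pose, face shape) from w_a with fine styles
    (color, texture) from w_b."""
    w_mix = w_a.clone()
    w_mix[:, crossover:] = w_b[:, crossover:]   # swap styles after the cut
    return generator.synthesis(w_mix)
```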

License

The source code, pretrained models, and dataset will be made available under the Creative Commons BY-NC 4.0 license by Yandex LLC. You can use, copy, transform, and build upon the material for non-commercial purposes as long as you give appropriate credit by citing our paper and indicate if changes were made.

Citation

@article{viazovetskyi2020stylegan2,
 title={StyleGAN2 Distillation for Feed-forward Image Manipulation},
 author={Yuri Viazovetskyi and Vladimir Ivashkin and Evgeny Kashin},
 journal={arXiv preprint arXiv:2003.03581},
 year={2020}
}
