Realistic Full-Body Anonymization with Surface-Guided GANs

This is the official source code for the paper “Realistic Full-Body Anonymization with Surface-Guided GANs”.

[Arxiv Paper]

Surface-guided GANs is an automatic full-body anonymization technique based on Generative Adversarial Networks.

The key idea of surface-guided GANs is to guide the generative model with dense pixel-to-surface information (based on continuous surface embeddings). This yields highly realistic anonymization results and allows for diverse anonymization.
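The dense pixel-to-surface conditioning described above can be pictured as attaching a continuous surface-embedding vector to every pixel and feeding it to the generator alongside the (masked) image. A minimal sketch of that idea follows; the function name and the (H, W, C) tensor layout are assumptions for illustration, not the repository's actual API or architecture.

```python
import numpy as np

def concat_surface_guidance(masked_image, surface_embedding):
    """Channel-wise concatenation of an image with a dense surface-embedding map.

    Illustrative sketch only: each pixel carries a continuous
    surface-embedding vector, so the generator sees where on the body
    surface every pixel lies.
    """
    # Spatial dimensions must match so pixels align with embeddings.
    assert masked_image.shape[:2] == surface_embedding.shape[:2]
    return np.concatenate([masked_image, surface_embedding], axis=-1)

# Example: a 4x4 RGB crop with a 16-dim embedding per pixel
# yields a 4x4 tensor with 3 + 16 = 19 channels.
crop = np.zeros((4, 4, 3))
embedding = np.ones((4, 4, 16))
conditioned = concat_surface_guidance(crop, embedding)
```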


Requirements

  • PyTorch >= 1.9
  • Torchvision >= 0.11
  • Python >= 3.8
  • A CUDA-capable device for training. Training was done with 1-4 32GB V100 GPUs.


Installation

We recommend setting up and installing PyTorch with Anaconda, following the PyTorch installation instructions.

  1. Clone the repository: git clone
  2. Install with pip:

pip install -e .

Test the model

The script can anonymize image paths, directories, and videos; python --help prints the available options.

To anonymize, visualize and save an output image, you can write:

python3 configs/surface_guided/ coco_val2017_000000001000.jpg --visualize --save

The truncation value decides the “creativity” of the generator and can be specified in the range (0, 1). Setting -t 1 generates diverse anonymizations between individuals in the image.
We recommend t=0.5 as a trade-off between quality and diversity.

python3 configs/surface_guided/ coco_val2017_000000001000.jpg --visualize --save -t 1
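The truncation value described above follows the usual GAN truncation trick: a sampled latent is interpolated toward the average latent, trading diversity for quality. A minimal sketch, where the function and variable names are hypothetical and not the repository's actual sampling code:

```python
import numpy as np

def truncate_latent(w, w_avg, t=0.5):
    """Truncation trick sketch (hypothetical helper, not the repo's API).

    Interpolates a sampled latent toward the average latent:
    t=1 keeps full diversity, while t -> 0 collapses every
    identity toward the "average" (typically highest-quality) output.
    """
    w, w_avg = np.asarray(w, dtype=float), np.asarray(w_avg, dtype=float)
    return w_avg + t * (w - w_avg)

# t=1 returns the sampled latent unchanged (maximum diversity);
# t=0.5 moves it halfway toward the average.
sampled = np.array([2.0, -2.0])
average = np.zeros(2)
half = truncate_latent(sampled, average, t=0.5)
full = truncate_latent(sampled, average, t=1.0)
```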

Pre-trained models

The current release includes a pre-trained model for ConfigE from the main paper.
More pre-trained models will be released later.

Train the model

Instructions to train and reproduce results from the paper will be released by January 14th 2022.


All code, except as stated below, is released under the MIT License.

Code under certain directories is provided under other licenses:


If you use this code for your research, please cite:

      title={Realistic Full-Body Anonymization with Surface-Guided GANs}, 
      author={Håkon Hukkelås and Morten Smebye and Rudolf Mester and Frank Lindseth},

