EigenGAN

TensorFlow implementation of EigenGAN: Layer-Wise Eigen-Learning for GANs

[Sample traversal results]

  • CelebA: Gender, Bangs, Body Side, Pose (Yaw), Lighting, Smile, Face Shape, Lipstick Color

  • Anime: Painting Style, Pose (Yaw), Pose (Pitch), Zoom & Rotate, Flush & Eye Color, Mouth Shape, Hair Color, Hue (Orange-Blue)


Usage

  • Environment
    • Python 3.6

    • TensorFlow 1.15

    • OpenCV, scikit-image, tqdm, oyaml

    • we recommend Anaconda or Miniconda; you can then create the environment with the commands below

      conda create -n EigenGAN python=3.6
      
      source activate EigenGAN
      
      conda install opencv scikit-image tqdm tensorflow-gpu=1.15
      
      conda install -c conda-forge oyaml
      
    • NOTICE: if you create a new conda environment, remember to activate it before running any other command

      source activate EigenGAN
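
    • optionally, verify the setup from a Python shell; this is just a quick sanity check, not part of the repository

      # expect a 1.15.x version string, and True if CUDA and the GPU are visible
      import tensorflow as tf
      print(tf.__version__)
      print(tf.test.is_gpu_available())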
      
  • Data Preparation
    • CelebA-unaligned (10.2GB, higher quality than the aligned data)
      • download the dataset

      • unzip and process the data

        7z x ./data/img_celeba/img_celeba.7z/img_celeba.7z.001 -o./data/img_celeba/
        
        unzip ./data/img_celeba/annotations.zip -d ./data/img_celeba/
        
        python ./scripts/align.py
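
      • optionally, check the alignment output; this minimal snippet (not part of the repo) counts the aligned images produced by ./scripts/align.py, assuming they are written as .jpg files into the directory passed to --img_dir in the training command below

        # the full CelebA dataset contains 202,599 face images
        import glob
        pattern = ("./data/img_celeba/aligned/"
                   "align_size(572,572)_move(0.250,0.000)_face_factor(0.450)_jpg/data/*.jpg")
        print(len(glob.glob(pattern)))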
        
    • Anime
      • download the dataset

        mkdir -p ./data/anime
        
        rsync --verbose --recursive rsync://78.46.86.149:873/biggan/portraits/ ./data/anime/original_imgs
        
      • process the data

        python ./scripts/remove_black_edge.py
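
      • ./scripts/remove_black_edge.py is the repository's own preprocessing script; purely to illustrate the general idea (this is not that script), a border-darkness check with OpenCV might look like the sketch below

        # illustrative sketch only -- NOT ./scripts/remove_black_edge.py
        # flags images whose outer border is near-black so they can be cropped or skipped
        import cv2
        import numpy as np

        def has_black_edge(path, border=4, thresh=10):
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            edge_pixels = np.concatenate([
                img[:border].ravel(), img[-border:].ravel(),
                img[:, :border].ravel(), img[:, -border:].ravel(),
            ])
            return edge_pixels.mean() < thresh

        print(has_black_edge("./data/anime/original_imgs/example.jpg"))  # hypothetical file name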
        
  • Run (supports multi-GPU)
    • training on CelebA

      CUDA_VISIBLE_DEVICES=0,1 \
      python train.py \
      --img_dir './data/img_celeba/aligned/align_size(572,572)_move(0.250,0.000)_face_factor(0.450)_jpg/data' \
      --experiment_name CelebA
      
    • training on Anime

      CUDA_VISIBLE_DEVICES=0,1 \
      python train.py \
      --img_dir ./data/anime/remove_black_edge_imgs \
      --experiment_name Anime
      
    • testing

      CUDA_VISIBLE_DEVICES=0 \
      python test_traversal_all_dims.py \
      --experiment_name CelebA
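
    • conceptually, test_traversal_all_dims.py traverses each learned dimension in turn; the sketch below shows the idea of such a per-dimension traversal (dimension count, value range, and the generator call are illustrative assumptions, not the repo's actual code)

      # sweep one latent dimension at a time while keeping the others fixed
      import numpy as np

      num_dims, num_steps = 6, 11                 # assumed sizes, for illustration only
      z = np.random.randn(num_dims)               # one fixed latent sample
      for d in range(num_dims):
          for value in np.linspace(-3.0, 3.0, num_steps):
              z_step = z.copy()
              z_step[d] = value                   # vary only dimension d
              # image = generator(z_step)         # generator call is repo-specific, omitted here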
      
    • loss visualization

      CUDA_VISIBLE_DEVICES='' \
      tensorboard \
      --logdir ./output/CelebA/summaries \
      --port 6006
      
  • Using Trained Weights
    • trained weights (download and move them to ./output/, e.g. ./output/CelebA.zip)

    • unzip the file (CelebA.zip for example)

      unzip ./output/CelebA.zip -d ./output/
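
    • optionally, confirm the unpacked layout; a minimal check assuming CelebA.zip unpacks to ./output/CelebA/ (consistent with --experiment_name and the TensorBoard logdir above)

      # the summaries folder used by TensorBoard should be among the listed entries
      import os
      print(os.listdir("./output/CelebA"))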
      
    • testing (see above)

Citation

If you find EigenGAN useful in your research, please consider citing:

@article{he2021eigengan,
  title={EigenGAN: Layer-Wise Eigen-Learning for GANs},
  author={He, Zhenliang and Kan, Meina and Shan, Shiguang},
  journal={arXiv:2104.12476},
  year={2021}
}

GitHub

https://github.com/LynnHo/EigenGAN-Tensorflow