Occlusion Robust 3D Face Reconstruction
Yeong-Joon Ju, Gun-Hee Lee, Jung-Ho Hong, and Seong-Whan Lee
Code for Occlusion Robust 3D Face Reconstruction in “Complete Face Recovery GAN: Unsupervised Joint Face Rotation and De-Occlusion from a Single-View Image (WACV 2022)”
We propose a novel two-stage fine-tuning strategy for occlusion-robust 3D face reconstruction. Training is split into two stages because training directly on extreme occlusions is difficult. In the first stage we fine-tune the baseline on our newly created occluded-face datasets, and in the second stage we apply a teacher-student learning method.
Our baseline is Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set, and we also referred to this code. Note that we focus on alignment and color as guidance for CFR-GAN on occluded facial images.
Python 3.7 or 3.8 can be used.
pip install -r requirements.txt
Install PyTorch3D 0.2.5.
Basel Face Model 2009 (BFM09) and Expression Basis (transferred from FaceWarehouse by Guo et al.). The original BFM09 model does not handle expression variations, so an extra expression basis is needed.
- However, we provide BFM_model_80.mat (the dimension of the identity and texture coefficients is 80). Download it and move it to the mmRegressor/BFM folder.
Prepare your own dataset for data augmentation. The datasets used in this paper can be downloaded as follows:
Unless the dataset provides facial landmark labels, you should predict the facial landmarks yourself. We recommend 3DDFA v2. If you want to reduce error propagation from the facial alignment networks, prepend a flag to the filename (e.g., "pred" + [filename]).
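A minimal sketch of that filename convention, assuming a "pred" prefix marks images whose landmarks were predicted rather than labeled (the helper name and flag handling here are illustrative, not part of the repo):

```python
from pathlib import Path

# Hypothetical helper: prepend a "pred" flag to filenames whose
# landmarks come from an alignment network, so later stages can
# treat those samples with less trust.
PRED_FLAG = "pred"

def flag_predicted(filename: str) -> str:
    """Prepend the prediction flag to a filename, at most once."""
    name = Path(filename).name
    if name.startswith(PRED_FLAG):
        return name  # already flagged
    return PRED_FLAG + name
```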
In order to train an occlusion-robust 3D face model, occluded face image datasets are essential, but none exist. Therefore, we create datasets by synthesizing hand-shaped masks onto face images.
python create_train_stage1.py --img_path [your image folder] --lmk_path [your landmarks folder] --save_path [path to save]
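The script above composites a hand-shaped occluder onto each face. A minimal sketch of the placement idea, assuming point landmarks and ignoring the actual mask textures (all names here are illustrative, not the script's API):

```python
import random

def occluder_box(landmarks, img_w, img_h, scale=0.6, seed=None):
    """Pick a square region anchored on a randomly chosen facial
    landmark where a hand-shaped occluder could be pasted.
    `landmarks` is a list of (x, y) points; returns a
    (left, top, right, bottom) box clamped to the image bounds."""
    rng = random.Random(seed)
    cx, cy = rng.choice(landmarks)             # anchor on a facial point
    half = int(min(img_w, img_h) * scale / 2)  # occluder half-size in px
    left = max(0, int(cx) - half)
    top = max(0, int(cy) - half)
    right = min(img_w, int(cx) + half)
    bottom = min(img_h, int(cy) + half)
    return left, top, right, bottom
```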
For the first training stage, prepare the following folders (or modify the folder names in the training code):
- occluded (augmented images)
- ori_img (original images)
- landmarks (3D landmarks)
**You must align images with align.py**
The meta file format is:

[filename] [left eye x] [left eye y] [right eye x] [right eye y] [nose x] [nose y] [left mouth x] [left mouth y] ...

You can use MTCNN or RetinaFace to detect these facial points.
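Assuming one sample per line in that format (a space-separated filename followed by x/y pairs), a minimal parser sketch:

```python
def parse_meta_line(line):
    """Parse one meta-file line into (filename, [(x, y), ...]).
    Expects a filename followed by an even number of coordinates."""
    parts = line.split()
    filename, coords = parts[0], [float(v) for v in parts[1:]]
    if len(coords) % 2 != 0:
        raise ValueError("odd number of coordinates in: " + filename)
    # pair up consecutive values as (x, y) points
    points = list(zip(coords[0::2], coords[1::2]))
    return filename, points
```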
First Fine-tuning Stage:
Instead of a skin mask, we use BiSeNet, a face parsing network. The code and weights were modified and re-trained from this code.
- Download the weights of the face parsing network to the faceParsing folder.
- Download the weights of the baseline 3D network to the mmRegressor/network folder.
Train the occlusion-robust 3D face model:
To show logs:
tensorboard --logdir=logs_stage1 --bind_all --reload_multifile=true
Second Fine-tuning Stage:
- You can download the MaskedFaceNet dataset here.
- You can download the FFHQ dataset here.
To show logs:
tensorboard --logdir=logs_stage2 --bind_all --reload_multifile=true
If you would like to evaluate your results, please refer to: