TCMR

This repository is the official PyTorch implementation of Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video (CVPR 2021). The base code is largely borrowed from VIBE. Find more qualitative results here.

[Qualitative result | Paper teaser video]

Installation

TCMR is tested on Ubuntu 16.04 with PyTorch 1.4 and Python 3.7.10.
You may need sudo privilege for the installation.

source scripts/install_pip.sh
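
After installation, a quick way to confirm the environment matches the tested setup (an illustrative check, not part of the repository's scripts):

# expected output is roughly: 3.7.10 1.4.0
python -c "import sys, torch; print(sys.version.split()[0], torch.__version__)"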

If you have a problem related to torchgeometry, please check this out.

Quick demo

  • Download the pre-trained demo TCMR weights and required data with the command below, then download the SMPL layers from here (male & female) and here (neutral). Put the SMPL layers (pkl files) under ${ROOT}/data/base_data/. A quick sanity check for this setup is sketched after this list.
source scripts/get_base_data.sh
  • Run the demo with options (e.g., render on a plain background). See more option details in the bottom lines of demo.py.
  • A video overlaid with the rendered meshes will be saved in ${ROOT}/output/demo_output/.
python demo.py --vid_file demo.mp4 --gpu 0 
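
If the demo cannot find the SMPL layers, the following illustrative checks may help; the exact pkl file names depend on the SMPL release you downloaded:

# SMPL layers should live here after the setup step above
ls data/base_data/*.pkl
# the rendered video lands here after a successful demo run
ls output/demo_output/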

Results

Here I report the performance of TCMR.

[Table 4 and Table 6: performance of TCMR]

See our paper for more details.

Running TCMR

Download the pre-processed data (except the InstaVariety dataset) from here.
You may also download the datasets from their original sources and pre-process them yourself; refer to this.
Put the SMPL layers (pkl files) under ${ROOT}/data/base_data/.

The data directory structure should follow the hierarchy below.

${ROOT}  
|-- data  
|   |-- base_data  
|   |-- preprocessed_data  
|   |-- pretrained_models
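
Before running evaluation or training, you can verify this layout with a small shell check (illustrative, assuming you run it from ${ROOT}):

for d in data/base_data data/preprocessed_data data/pretrained_models; do
  [ -d "$d" ] && echo "found:   $d" || echo "missing: $d"
done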

Evaluation

  • Download pre-trained TCMR weights from here.
  • Run the evaluation code with the corresponding config file to reproduce the performance reported in the tables of our paper. A loop over all three datasets is sketched after this list.
# dataset: 3dpw, mpii3d, h36m 
python evaluate.py --dataset 3dpw --cfg ./configs/repr_table4_3dpw_model.yaml --gpu 0 
  • You may test options such as average filtering and rendering. See the bottom lines of ${ROOT}/lib/core/config.py.
  • We have checked the rendering results of TCMR on the 3DPW validation and test sets.
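
As referenced above, a sketch that loops the evaluation over all three datasets; the config file names for mpii3d and h36m are assumed to follow the same naming pattern as the 3dpw one:

# assumes configs/repr_table4_{mpii3d,h36m}_model.yaml exist alongside the 3dpw config
for DB in 3dpw mpii3d h36m; do
  python evaluate.py --dataset ${DB} --cfg ./configs/repr_table4_${DB}_model.yaml --gpu 0
done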

Reproduction (Training)

  • Run the training code with a corresponding config file to reproduce the performance in the tables of our paper.
  • Some settings are hard-coded based on the config file's name. Please use the exact config file to reproduce our results instead of changing the contents of the default config file.
# training outputs are saved in `experiments` directory
# mkdir experiments
python train.py --cfg ./configs/repr_table4_3dpw_model.yaml --gpu 0 
  • After training, checkpoints are saved in ${ROOT}/experiments/{date_of_training}/. Set the config file's TRAIN.PRETRAINED to the checkpoint path (either checkpoint.pth.tar or model_best.pth.tar) and follow the evaluation command; this step is sketched after this list.
  • You may test the motion discriminator introduced in VIBE by uncommenting the code blocks marked with exclude motion discriminator notations.
  • We have not yet released the NeuralAnnot SMPL annotations of Human3.6M used in our paper, so the performance in Table 6 may differ slightly from the paper.
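
A sketch of the post-training evaluation step referenced above; the checkpoint path is a placeholder, and the in-place edit assumes the YAML exposes a single PRETRAINED key under TRAIN:

# point TRAIN.PRETRAINED at your checkpoint (replace {date_of_training} with the real folder)
sed -i "s|PRETRAINED: .*|PRETRAINED: './experiments/{date_of_training}/model_best.pth.tar'|" \
  ./configs/repr_table4_3dpw_model.yaml
# then re-run the evaluation command from the section above
python evaluate.py --dataset 3dpw --cfg ./configs/repr_table4_3dpw_model.yaml --gpu 0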

Reference

@InProceedings{choi2020beyond,
  title={Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video},
  author={Choi, Hongsuk and Moon, Gyeongsik and Lee, Kyoung Mu},
  booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021}
}

GitHub

https://github.com/hongsukchoi/TCMR_RELEASE