DensePose

A real-time approach for mapping all human pixels of 2D RGB images to a 3D surface-based model of the body.

Dense human pose estimation aims at mapping all human pixels of an RGB image to the 3D surface of the human body. DensePose-RCNN is implemented in the Detectron framework and is powered by Caffe2.

In this repository, we provide the code to train and evaluate DensePose-RCNN. We also provide notebooks to visualize the collected DensePose-COCO dataset and show the correspondences to the SMPL model.
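To make the pixel-to-surface mapping concrete, here is a minimal sketch of reading an inferred IUV image, assuming the convention used by the DensePose outputs in which channel 0 stores the body-part index (0 for background, 1-24 for the surface parts) and channels 1-2 store the U,V surface coordinates scaled to 0-255; the file path is a placeholder.

```python
# Sketch of interpreting a DensePose IUV output image.
# Assumption: *_IUV.png stores part index in channel 0 (0 = background, 1-24 = part)
# and the U, V surface coordinates in channels 1-2, scaled to [0, 255].
# The path below is a placeholder; see GETTING_STARTED.md for producing outputs.
import cv2
import numpy as np

iuv = cv2.imread("DensePoseData/infer_out/demo_im_IUV.png")  # H x W x 3, uint8

part_index = iuv[:, :, 0]                    # surface part each pixel belongs to
u = iuv[:, :, 1].astype(np.float32) / 255.0  # U coordinate on that part's chart
v = iuv[:, :, 2].astype(np.float32) / 255.0  # V coordinate on that part's chart

# Every non-background pixel now carries a (part, U, V) body-surface coordinate.
human_pixels = part_index > 0
print("human pixels:", int(human_pixels.sum()))
print("parts present:", sorted(np.unique(part_index[human_pixels]).tolist()))
```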

Installation

Please find installation instructions for Caffe2 and DensePose in INSTALL.md, a document based on the Detectron installation instructions.

Inference-Training-Testing

After installation, please see GETTING_STARTED.md for examples of inference, training, and testing.
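For orientation, the sketch below scripts the Detectron-style inference entry point from Python; the config name, weights path, and image directory are placeholders, and GETTING_STARTED.md remains the authoritative reference for the exact commands and pretrained model files.

```python
# Hypothetical wrapper around the Detectron-style inference script shipped with DensePose.
# Config, weights, and image paths are placeholders; consult GETTING_STARTED.md for the
# actual configs and model files.
import subprocess

subprocess.check_call([
    "python2", "tools/infer_simple.py",
    "--cfg", "configs/DensePose_ResNet101_FPN_s1x-e2e.yaml",   # placeholder config
    "--output-dir", "DensePoseData/infer_out/",
    "--image-ext", "jpg",
    "--wts", "/path/to/DensePose_ResNet101_FPN_s1x-e2e.pkl",   # placeholder weights
    "DensePoseData/demo_data/",                                # folder of input images
])
```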

Notebooks

Visualization of DensePose-COCO annotations:

See notebooks/DensePose-COCO-Visualize.ipynb to visualize the DensePose-COCO annotations on the images.
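For readers who prefer a script over the notebook, the sketch below loads the annotations with pycocotools and inspects the collected points for one image; the annotation file name is a placeholder, and the dp_x/dp_y/dp_I fields are assumed to follow the DensePose-COCO extension of the COCO person annotations.

```python
# Sketch of inspecting DensePose-COCO annotations outside the notebook.
# Assumptions: the DensePose-COCO fields (dp_x, dp_y, dp_I) on COCO person annotations,
# with points stored relative to the box and scaled to [0, 255]; file names are placeholders.
from pycocotools.coco import COCO
import numpy as np

coco = COCO("DensePoseData/densepose_coco_2014_minival.json")
img_id = coco.getImgIds()[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))

for ann in anns:
    if "dp_x" not in ann:
        continue  # not every person instance carries DensePose points
    bx, by, bw, bh = ann["bbox"]
    # Stretch the collected points from box-relative [0, 255] to image coordinates.
    xs = bx + np.array(ann["dp_x"]) / 255.0 * bw
    ys = by + np.array(ann["dp_y"]) / 255.0 * bh
    parts = ann["dp_I"]  # body-part index for each collected point
    print(len(xs), "annotated points, parts:", sorted(set(int(p) for p in parts)))
```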


DensePose-COCO in 3D:

See notebooks/DensePose-COCO-on-SMPL.ipynb to localize the DensePose-COCO annotations on the 3D template (SMPL) model.
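A minimal sketch of loading the SMPL template mesh that the notebook maps annotations onto is shown below, assuming the standard SMPL model pickle with 'v_template' and 'f' entries; the model file is obtained separately from the SMPL website, and its path here is a placeholder.

```python
# Sketch of loading the SMPL template mesh.
# Assumptions: the standard SMPL pickle layout with 'v_template' (6890 x 3 vertices)
# and 'f' (triangle faces); unpickling the model requires the chumpy package, and
# under Python 3 pass encoding="latin1" to pickle.load. The path is a placeholder.
import pickle
import numpy as np

with open("DensePoseData/basicmodel_m_lbs_10_207_0_v1.0.0.pkl", "rb") as f:
    smpl = pickle.load(f)

vertices = np.array(smpl["v_template"])  # template body shape in the rest pose
faces = np.array(smpl["f"])              # triangle indices into the vertex array

print("vertices:", vertices.shape, "faces:", faces.shape)
```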


Visualize DensePose-RCNN Results:

See notebooks/DensePose-RCNN-Visualize-Results.ipynb to visualize the inferred DensePose-RCNN results.


DensePose-RCNN Texture Transfer:

See notebooks/DensePose-RCNN-Texture-Transfer.ipynb to transfer a texture onto the detected people using the inferred DensePose-RCNN results.
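The core idea, sketched below under stated assumptions, is to use each pixel's (I, U, V) coordinate to look up a color in a per-part texture stack; the IUV channel convention, the input paths, and the (24, 200, 200, 3) texture layout are assumptions for illustration, not the notebook's exact code.

```python
# Illustration of texture transfer via DensePose IUV output.
# Assumptions: (1) *_IUV.png as above (channel 0 = part index, channels 1-2 = U, V
# scaled to [0, 255]); (2) a per-part texture stack of shape (24, S, S, 3), e.g.
# built from a texture atlas of your choice. Paths are placeholders.
import cv2
import numpy as np

iuv = cv2.imread("DensePoseData/infer_out/demo_im_IUV.png")
image = cv2.imread("DensePoseData/demo_data/demo_im.jpg")

S = 200
atlas = np.zeros((24, S, S, 3), dtype=np.uint8)  # fill with one S x S texture per part

output = image.copy()
part = iuv[:, :, 0]
u = (iuv[:, :, 1].astype(np.int32) * (S - 1)) // 255
v = (iuv[:, :, 2].astype(np.int32) * (S - 1)) // 255

for p in range(1, 25):
    mask = part == p
    # Paint every pixel of part p with the texture color at its (U, V) coordinate.
    output[mask] = atlas[p - 1, v[mask], u[mask]]

cv2.imwrite("textured_demo_im.png", output)
```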
