MHFormer: Multi-Hypothesis Transformer for 3D Human Pose Estimation

This repo is the official PyTorch implementation of "MHFormer: Multi-Hypothesis Transformer for 3D Human Pose Estimation" by Wenhao Li, Hong Liu, Hao Tang, Pichao Wang, and Luc Van Gool.

Dependencies

  • CUDA 11.1
  • Python 3.6
  • PyTorch 1.7.1

Dataset setup

Please download the dataset from the Human3.6M website and follow the instructions in VideoPose3D to set up the Human3.6M dataset in the './dataset' directory:

|-- dataset
|   |-- data_3d_h36m.npz
|   |-- data_2d_h36m_cpn_ft_h36m_dbb.npz
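Once the files are in place, you can sanity-check the 3D data. The snippet below is a minimal sketch assuming the VideoPose3D conventions, where `data_3d_h36m.npz` stores a pickled dict under the key `positions_3d`, mapping subject to action to an array of shape (frames, joints, 3); it builds a tiny mock file with that layout rather than touching the real dataset.

```python
import os
import tempfile

import numpy as np

# Mock a miniature data_3d_h36m.npz with the assumed VideoPose3D layout:
# {'positions_3d': {subject: {action: (frames, joints, 3) array}}}.
path = os.path.join(tempfile.mkdtemp(), 'data_3d_h36m.npz')
mock = {'S1': {'Walking': np.zeros((5, 17, 3), dtype=np.float32)}}
np.savez_compressed(path, positions_3d=mock)

# Loading mirrors how the real file would be read (allow_pickle is needed
# because the payload is a Python dict, not a plain array).
data = np.load(path, allow_pickle=True)['positions_3d'].item()
print(data['S1']['Walking'].shape)  # (5, 17, 3)
```

For the real file, point `path` at './dataset/data_3d_h36m.npz' and inspect the subjects and actions the same way.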

Download pretrained model

The pretrained model can be found on Google_Drive; please download it and put it in the './checkpoint' directory.

Test the model

To test the pretrained model on Human3.6M:

python main.py --reload --previous_dir 'checkpoint/pretrained'

Here, we compare our MHFormer with recent state-of-the-art methods on the Human3.6M dataset. The evaluation metric is Mean Per Joint Position Error (MPJPE) in millimeters.

Model         MPJPE (mm)
VideoPose3D   46.8
PoseFormer    44.3
MHFormer      43.0
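For reference, MPJPE is simply the Euclidean distance between each predicted joint and its ground-truth position, averaged over joints and frames. A minimal self-contained sketch (the function name and the toy arrays are illustrative, not from the repo):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per Joint Position Error, in the units of the inputs (here: mm).

    pred, gt: arrays of shape (frames, joints, 3).
    """
    # Per-joint Euclidean distance, then mean over all joints and frames.
    return np.mean(np.linalg.norm(pred - gt, axis=-1))

# Toy example: every joint is displaced by the vector (3, 4, 0) mm,
# so each per-joint error is 5 mm and so is the mean.
gt = np.zeros((2, 17, 3))
pred = np.zeros((2, 17, 3))
pred[..., 0] = 3.0
pred[..., 1] = 4.0
print(mpjpe(pred, gt))  # 5.0
```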

Train the model

To train on Human3.6M:

python main.py --train


If you find our work useful in your research, please consider citing:

@article{li2021mhformer,
  title={MHFormer: Multi-Hypothesis Transformer for 3D Human Pose Estimation},
  author={Li, Wenhao and Liu, Hong and Tang, Hao and Wang, Pichao and Van Gool, Luc},
  journal={arXiv preprint arXiv:2111.12707},
  year={2021}
}

Our code is extended from the following repositories. We thank the authors for releasing their code.
