AlphaPose is an accurate multi-person pose estimator. It is the first open-source system to achieve 70+ mAP (72.3 mAP) on the COCO dataset and 80+ mAP (82.1 mAP) on the MPII dataset. To match poses that correspond to the same person across frames, we also provide an efficient online pose tracker called Pose Flow. It is the first open-source online pose tracker to achieve both 60+ mAP (66.5 mAP) and 50+ MOTA (58.3 MOTA) on the PoseTrack Challenge dataset.

AlphaPose supports both Linux and Windows!



For the Windows version, please check out doc/

Installation

  1. Get the code.
git clone -b pytorch https://github.com/MVIG-SJTU/AlphaPose.git
  2. Install PyTorch 0.4.0 and other dependencies.
pip install -r requirements.txt
  3. Download the models manually: duc_se.pth (2018/08/30) (Google Drive | Baidu pan) and yolov3-spp.weights (Google Drive | Baidu pan). Place them into ./models/sppe and ./models/yolo respectively.

Quick Start

  • Input dir: Run AlphaPose for all images in a folder with:
python3 demo.py --indir ${img_directory} --outdir examples/res
  • Video: Run AlphaPose for a video and save the rendered video with:
python3 video_demo.py --video ${path to video} --outdir examples/res --save_video
  • Webcam: Run AlphaPose using webcam and visualize the results with:
python3 webcam_demo.py --webcam 0 --outdir examples/res --vis
  • Input list: Run AlphaPose for images in a list and save the rendered images with:
python3 demo.py --list examples/list-coco-demo.txt --indir ${img_directory} --outdir examples/res --save_img
  • Note: If you meet an OOM (out of memory) problem, decrease the pose estimation batch size until the program can run on your machine:
python3 demo.py --indir ${img_directory} --outdir examples/res --posebatch 30
  • Getting more accurate: You can enable flip testing to get more accurate results by disabling fast_inference, e.g.:
python3 demo.py --indir ${img_directory} --outdir examples/res --fast_inference False
  • Speeding up: See the speed-up notes in doc/ for more details.
  • Output format: See the output-format notes in doc/ for more details.
  • For more: See doc/ for more run options.
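Once a run finishes, the results land in the output directory as a JSON file of per-person detections. As a minimal sketch of working with that output, the snippet below parses a hypothetical record in the COCO-style keypoint layout (17 keypoints flattened as x, y, confidence triples); the exact file name and field names are assumptions here, so check the output-format documentation in doc/ for the authoritative schema.

```python
import json

# Hypothetical sample record mirroring a COCO-style keypoint result; the
# field names ("image_id", "keypoints", "score") are assumptions, not the
# confirmed AlphaPose schema.
sample_json = json.dumps([
    {
        "image_id": "demo.jpg",
        "category_id": 1,
        # 17 COCO keypoints flattened as [x1, y1, conf1, x2, y2, conf2, ...]
        "keypoints": [100.0, 50.0, 0.9] * 17,
        "score": 2.8,
    }
])

def keypoints_by_image(results):
    """Group (x, y, confidence) triples per detected person, keyed by image."""
    grouped = {}
    for person in results:
        flat = person["keypoints"]
        triples = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
        grouped.setdefault(person["image_id"], []).append(triples)
    return grouped

grouped = keypoints_by_image(json.loads(sample_json))
print(len(grouped["demo.jpg"][0]))  # 17 keypoints for the single detected person
```

Grouping by image first makes it easy to overlay every detected skeleton on its source frame, or to feed per-frame poses into a tracker such as Pose Flow.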


Check out doc/ for the FAQ.


The PyTorch version of AlphaPose is developed and maintained by Jiefeng Li, Hao-Shu Fang and Cewu Lu.


Please cite these papers in your publications if they help your research:

@inproceedings{fang2017rmpe,
  title={{RMPE}: Regional Multi-person Pose Estimation},
  author={Fang, Hao-Shu and Xie, Shuqin and Tai, Yu-Wing and Lu, Cewu},
  booktitle={ICCV},
  year={2017}
}

@article{xiu2018poseflow,
  author = {Xiu, Yuliang and Li, Jiefeng and Wang, Haoyu and Fang, Yinghong and Lu, Cewu},
  title = {{Pose Flow}: Efficient Online Pose Tracking},
  journal = {ArXiv e-prints},
  eprint = {1802.00977},
  year = {2018}
}