MAED: Encoder-decoder with Multi-level Attention for 3D Human Shape and Pose Estimation

Getting Started

Our code is implemented and tested with Python 3.6 and PyTorch 1.5.

Install PyTorch following the official guide on the PyTorch website.

Then install the remaining requirements in a virtualenv or conda environment:

pip install -r requirements.txt

Data Preparation

Refer to data.md for instructions.

Training

Stage 1 training

Generally, you can use PyTorch's distributed launch script to start training.

For example, to train on 2 nodes with 4 GPUs each (2 × 4 = 8 GPUs total), run on node 0:

python -u -m torch.distributed.launch \
    --nnodes=2 \
    --node_rank=0 \
    --nproc_per_node=4 \
    --master_port=<MASTER_PORT> \
    --master_addr=<MASTER_NODE_ID> \
    --use_env \
    train.py --cfg configs/config_stage1.yaml
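On the other node, the same command is launched with `--node_rank=1`. As a sanity check of the layout above, the sketch below mirrors how `torch.distributed.launch` derives the world size and per-worker global ranks from `--nnodes`, `--node_rank`, and `--nproc_per_node`; the helper function is illustrative and not part of the MAED codebase.

```python
def global_rank(node_rank: int, nproc_per_node: int, local_rank: int) -> int:
    """Global rank of one worker: node offset plus its local GPU index."""
    return node_rank * nproc_per_node + local_rank

nnodes, nproc_per_node = 2, 4
world_size = nnodes * nproc_per_node  # 2 x 4 = 8 processes in total

# Node 0 hosts global ranks 0-3; node 1 (--node_rank=1) hosts ranks 4-7.
ranks_node0 = [global_rank(0, nproc_per_node, r) for r in range(nproc_per_node)]
ranks_node1 = [global_rank(1, nproc_per_node, r) for r in range(nproc_per_node)]
print(world_size, ranks_node0, ranks_node1)  # → 8 [0, 1, 2, 3] [4, 5, 6, 7]
```

With `--use_env`, the launcher passes these values to `train.py` through the `RANK`, `LOCAL_RANK`, and `WORLD_SIZE` environment variables rather than a `--local_rank` argument.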