Visualization-of-Human3.6M-Dataset

Plot and save the ground-truth and predicted results of the Human3.6M and CMU MoCap datasets.

human-motion-prediction

This is the code for visualizing the ground-truth and predicted results of the Human3.6M dataset.
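The Human3.6M joint angles in this pipeline are stored as exponential maps (axis-angle vectors), which must be converted to rotation matrices before plotting. A minimal, self-contained Rodrigues-formula sketch of that conversion (illustrative only, not the repository's own function):

```python
import math

def expmap_to_rotmat(r):
    """Convert an exponential-map vector r (axis * angle) to a
    3x3 rotation matrix via the Rodrigues formula."""
    theta = math.sqrt(sum(x * x for x in r))
    if theta < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    x, y, z = (v / theta for v in r)
    # Skew-symmetric cross-product matrix K of the unit axis.
    K = [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]
    s, c = math.sin(theta), math.cos(theta)
    # R = I + sin(theta) * K + (1 - cos(theta)) * K^2
    R = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            k2 = sum(K[i][m] * K[m][j] for m in range(3))
            R[i][j] = (1.0 if i == j else 0.0) + s * K[i][j] + (1.0 - c) * k2
    return R
```

For example, the exponential map `[0, 0, pi/2]` yields a 90-degree rotation about the z-axis.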

To save a GIF of the ground-truth data, run

python forward_kinematics.py --save --save_name "figs/walking.gif"

To save a visualization for a trained model's samples.h5, run

python forward_kinematics.py --sample_name samples.h5 --save --save_name "figs/walking_py_0.gif"

Finally, to visualize the samples, run

python forward_kinematics.py

This should create an animated visualization of the skeleton's motion.
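Under the hood, forward_kinematics.py walks the skeleton tree, composing each joint's local rotation with its parent's and accumulating bone offsets into world-space positions. A simplified pure-Python sketch of that recursion (the `parents`, `offsets`, and `rotations` names here are illustrative, not the script's actual variables):

```python
def matmul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(a, v):
    """3x3 matrix times 3-vector."""
    return [sum(a[i][k] * v[k] for k in range(3)) for i in range(3)]

def forward_kinematics(parents, offsets, rotations):
    """World-space joint positions for a kinematic tree.

    parents[j]   -- index of joint j's parent (-1 for the root)
    offsets[j]   -- bone offset of joint j in its parent's frame
    rotations[j] -- 3x3 local rotation of joint j
    Joints must be ordered so that parents precede children.
    """
    n = len(parents)
    positions, world_rot = [None] * n, [None] * n
    for j in range(n):
        p = parents[j]
        if p < 0:  # root joint: offset is already in world space
            positions[j] = list(offsets[j])
            world_rot[j] = rotations[j]
        else:
            moved = matvec(world_rot[p], offsets[j])
            positions[j] = [positions[p][i] + moved[i] for i in range(3)]
            world_rot[j] = matmul(world_rot[p], rotations[j])
    return positions
```

With a two-joint chain whose root is rotated 90 degrees about z, a child bone of length 1 along x ends up along the y-axis, which is the behavior the animated skeleton relies on frame by frame.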

Due to space constraints, the data folder contains only Subject 5.

To download the full dataset, run

wget http://www.cs.stanford.edu/people/ashesh/h3.6m.zip

Acknowledgments

Julieta Martinez, Michael J. Black, Javier Romero.
On human motion prediction using recurrent neural networks. In CVPR 17.

The paper is also available on arXiv: https://arxiv.org/pdf/1705.02445.pdf

The code in this repository was written by Julieta Martinez and Javier Romero.

Thank you

Gaurav
