# Neural Motion Learner


This work extracts skeletal structure from volumetric observations and learns motion dynamics from the detected skeletal motions in a fully unsupervised manner.

Our model performs motion generation, interpolation, and retargeting based on the learned latent dynamics.
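As a rough illustration of what interpolation in a learned latent space means, here is a minimal sketch. This is not the model or API of this repository; the latent dimension (64) and the idea of decoding each code into a skeletal pose afterwards are hypothetical placeholders.

```python
import numpy as np

def interpolate_latents(z_a, z_b, num_steps=5):
    """Linearly interpolate between two latent motion codes.

    Returns a list of `num_steps` latent codes blending z_a into z_b.
    """
    alphas = np.linspace(0.0, 1.0, num_steps)
    return [(1.0 - a) * z_a + a * z_b for a in alphas]

# Two hypothetical 64-dim latent motion codes (stand-ins for encoder outputs).
z_a = np.zeros(64)
z_b = np.ones(64)
frames = interpolate_latents(z_a, z_b, num_steps=5)
# In the actual pipeline, each interpolated code would be decoded
# by the learned dynamics model into a skeletal motion frame.
```

A spherical interpolation (slerp) is also common for latent codes drawn from a Gaussian prior; linear interpolation is shown here only for simplicity.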

Note that this is an unofficial version of the work, so only a minimal amount of code is provided to demonstrate the results.

A full description, including the title, training code, and data pre-processing methods, will be uploaded once the paper for this work is accepted to a conference.


We tested on Python 3.8 and Ubuntu 18.04 LTS.

The architecture is built with PyTorch 1.7.1 and CUDA 11.0.

Creating a conda environment is recommended.

```shell
# Download the repository
git clone
cd neural_motion_learner

# Create conda env
conda create --name nmotion python=3.8
conda activate nmotion

# Modify to match your CUDA setting
```
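The PyTorch install command itself is not shown above. Assuming the PyTorch 1.7.1 / CUDA 11.0 combination stated earlier, the matching conda command published by pytorch.org would be:

```shell
# Official conda command for PyTorch 1.7.1 with CUDA 11.0
# (adjust cudatoolkit to match your local CUDA setting)
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=11.0 -c pytorch
```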


Using the provided pretrained model, run the demo code to visualize the following:

## Motion generation

Results will be stored in `output/generation`.

[Generation demo video]

## Motion interpolation

Results will be stored in `output/interpolation`.

[Interpolation demo video]

## Motion retargeting

Results will be stored in `output/retarget`.

[Retargeting demo video]

