rgbd-kinect-pose
Real-time RGBD-based Extended Body Pose Estimation
The output of our module is the SMPL-X parametric body mesh model:
- RNN estimates body pose from joints detected by Azure Kinect Body Tracking API
- For the face (expression and jaw) and hand pose we crop patches from the RGB image:
- for the hand model we use minimal-hand
- our face NN takes MediaPipe keypoints as input
The combined system runs at 30 fps on a 2080 Ti GPU and an 8-core 4 GHz CPU.
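The modules above all feed into one set of SMPL-X parameters per frame. As a rough illustration only, here is a hypothetical container whose field names and dimensions follow standard SMPL-X conventions, not necessarily this repo's exact internal layout:

```python
import numpy as np

# Hypothetical per-frame SMPL-X parameter container (not this repo's code);
# dimensions follow common SMPL-X conventions.
class SmplxFrame:
    def __init__(self):
        self.global_orient = np.zeros(3)        # root rotation (axis-angle)
        self.body_pose = np.zeros(21 * 3)       # 21 body joints, from the RNN
        self.left_hand_pose = np.zeros(15 * 3)  # from minimal-hand crops
        self.right_hand_pose = np.zeros(15 * 3)
        self.jaw_pose = np.zeros(3)             # from the face NN
        self.expression = np.zeros(10)          # facial expression coefficients
        self.betas = np.zeros(10)               # body shape (loaded from an external file)

frame = SmplxFrame()
total_params = sum(v.size for v in vars(frame).values())
```

Under these assumed dimensions a single frame carries 179 parameters; the shape coefficients (`betas`) are not estimated by the pipeline but loaded externally, matching the note in the Run section.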
How to use
Build
- Prereqs: your NVIDIA driver should support CUDA 10.2; Windows and macOS are not supported.
- Clone the repo:
  git clone https://github.com/rmbashirov/rgbd-kinect-pose.git
  cd rgbd-kinect-pose
  git submodule update --force --init --remote
- Docker setup:
  - Set nvidia as your default runtime for docker
  - Make docker run without sudo: create a docker group and add the current user to it:
    sudo groupadd docker
    sudo usermod -aG docker $USER
  - Reboot
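For the default-runtime step, a common approach (an assumption about your setup, using the standard keys for nvidia-container-runtime) is to edit `/etc/docker/daemon.json` and restart the Docker daemon:

```json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```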
- Build the Docker image: run the 2 build commands
- Attach your Azure Kinect camera
- Check that your Azure Kinect camera is working inside the Docker container:
  - Enter the Docker container: run ./run_local.sh from the docker dir
  - Then run python -m pyk4a.viewer --vis_color --no_bt --no_depth inside the docker container
Download data
- Download our data archive smplx_kinect_demo_data.tar.gz
- Unzip:
  mkdir /your/unpacked/dir
  tar -zxf smplx_kinect_demo_data.tar.gz -C /your/unpacked/dir
- Download models for the hand: see the link in the "Download models from here" line in our fork, and put them to /your/unpacked/dir/minimal_hand/model
- To download the SMPL-X parametric body model, go to the project website, register, go to the downloads section, download the SMPL-X v1.1 model, and put it to /your/unpacked/dir/pykinect/body_models/smplx
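After unpacking the archive and downloading the models, you can sanity-check the layout. This is a hypothetical helper, not part of the repo; it only checks the two model subdirectories named in the steps above:

```python
import os

# Hypothetical sanity check (not part of the repo): verify that the model
# subdirectories mentioned in the instructions exist under the unpacked dir.
EXPECTED_SUBDIRS = [
    "minimal_hand/model",
    "pykinect/body_models/smplx",
]

def missing_subdirs(root):
    """Return the expected subdirectories that are absent under root."""
    return [d for d in EXPECTED_SUBDIRS
            if not os.path.isdir(os.path.join(root, d))]
```

Calling `missing_subdirs("/your/unpacked/dir")` should return an empty list once both model directories are in place.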
- /your/unpacked/dir should look like this
- Set the data_dirpath and output_dirpath variables in the config file:
  - data_dirpath is the path to /your/unpacked/dir
  - output_dirpath is used to check timings or to store result images
  - Ensure these paths are visible inside the docker container: set the VOLUMES variable here
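Assuming the config file is YAML-like (the exact file and format are repo-specific; only the two variable names come from the instructions above), the relevant entries would look roughly like:

```yaml
# data_dirpath and output_dirpath are named in the instructions above;
# the surrounding structure of the config file is an assumption.
data_dirpath: /your/unpacked/dir
output_dirpath: /your/output/dir
```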
Run
- Run the demo: in the src dir run ./run_server.sh. It will enter the Docker container and use a config file where the shape of the person is loaded from an external file: in our work we did not focus on estimating the person's shape.