Python scripts for performing 6D pose estimation and shape reconstruction using the CenterSnap model in ONNX



  • The original model has not been officially released, so the official model may differ from this one once it is published.
  • The examples do not seem to work properly with a camera other than the one used in the original dataset. This is probably due to an implementation mistake in this repository; if you find the issue, please submit an issue or PR.
  • The model only works with the objects available in the training dataset: bottle, bowl, camera, can, laptop, and mug.


Requirements

  • Check the requirements.txt file.
  • Additionally, the depthai library is necessary for testing with OAK-D boards. Check the example below for how to install it.
  • Similarly, you will need to install the pyKinectAzure library to run the example with the Azure Kinect. Check the example below for how to install it.


Installation

pip install -r requirements.txt

ONNX model

Download the models from here and here, and place them in the models folder. To convert the original PyTorch model to ONNX, check the following branch: https://github.com/ibaiGorordo/CenterSnap/tree/convert_onnx

How to use

The model returns multiple outputs (segmentation map, heatmap, 3D position…); here is a short explanation of how to run these examples. For the image examples, you will need to download the data from the original source: https://www.dropbox.com/s/yfenvre5fhx3oda/nocs_test_subset.tar.gz
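
As a rough sketch of the kind of input preparation these scripts perform before feeding the network (the input resolution, channel layout, and normalization below are assumptions for illustration, not the repository's confirmed values):

```python
import numpy as np

def preprocess(rgb, depth, width=640, height=480):
    """Stack an RGB image and a depth map into one NCHW float tensor.
    The 4-channel RGB-D layout and [0, 1] normalization are assumed."""
    rgb = rgb.astype(np.float32) / 255.0         # normalize colors to [0, 1]
    depth = depth.astype(np.float32)[..., None]  # add a channel axis: H x W x 1
    rgbd = np.concatenate([rgb, depth], axis=-1) # H x W x 4
    return rgbd.transpose(2, 0, 1)[None]         # 1 x 4 x H x W

rgb = np.zeros((480, 640, 3), dtype=np.uint8)
depth = np.zeros((480, 640), dtype=np.uint16)
print(preprocess(rgb, depth).shape)  # (1, 4, 480, 640)
```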

  • Image Segmentation map:

CenterSnap Semantic map

python image_draw_semantic_map.py
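
Drawing the semantic map is essentially mapping each class id in the segmentation output to a color. A minimal numpy sketch (the palette values here are made up, not the repository's colors):

```python
import numpy as np

# Hypothetical palette: one BGR color per class id (0 = background).
PALETTE = np.array([[0, 0, 0], [255, 0, 0], [0, 255, 0],
                    [0, 0, 255], [255, 255, 0], [255, 0, 255],
                    [0, 255, 255]], dtype=np.uint8)

def colorize(seg_map):
    """Turn an HxW array of class ids into an HxWx3 color image."""
    return PALETTE[seg_map]

seg = np.array([[0, 1], [2, 3]])
print(colorize(seg).shape)  # (2, 2, 3)
```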
  • Image heatmap:

CenterSnap heatmap

python image_draw_heatmap.py
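
The heatmap marks likely object centers; detections are typically read off by keeping local maxima above a score threshold. A crude numpy-only version of that idea (the 3x3 window and the threshold value are assumptions):

```python
import numpy as np

def find_peaks(heatmap, threshold=0.5):
    """Return (row, col) locations that are 3x3 local maxima above
    `threshold` (a simple stand-in for max-pool-based NMS)."""
    h, w = heatmap.shape
    padded = np.pad(heatmap, 1, constant_values=-np.inf)
    # Build the 3x3 neighborhood maximum for every pixel.
    windows = np.stack([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    local_max = heatmap >= windows.max(axis=0)
    return np.argwhere(local_max & (heatmap > threshold))

hm = np.zeros((5, 5))
hm[2, 3] = 0.9
print(find_peaks(hm))  # [[2 3]]
```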
  • Image depth map (Same as input):

CenterSnap depth map

python image_draw_depth.py
  • Image projected 3D pose:

CenterSnap projected 3d pose

python image_draw_pose2d.py
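
The 2D drawing boils down to projecting the estimated 3D points into the image with the camera intrinsics, which is also why using a camera without updating the intrinsics breaks the overlay. A minimal pinhole projection sketch (the intrinsic values below are placeholders, not the dataset's calibration):

```python
import numpy as np

def project_points(points_3d, fx, fy, cx, cy):
    """Project Nx3 camera-frame points to Nx2 pixel coordinates
    with a simple pinhole model (no lens distortion)."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    return np.stack([u, v], axis=-1)

# A point 1 m straight ahead projects to the principal point.
pts = np.array([[0.0, 0.0, 1.0]])
print(project_points(pts, fx=600.0, fy=600.0, cx=320.0, cy=240.0))  # [[320. 240.]]
```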
  • Image 3D pose:

CenterSnap 3d pose

python image_draw_pose3d.py
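
For the 3D visualization, the estimated pose (rotation plus translation) is applied to points defined in the object frame, e.g. bounding-box corners. A minimal sketch with a hypothetical 4x4 pose matrix:

```python
import numpy as np

def transform(points, pose):
    """Apply a 4x4 rigid transform to Nx3 object-frame points."""
    homo = np.hstack([points, np.ones((len(points), 1))])  # N x 4 homogeneous
    return (pose @ homo.T).T[:, :3]

# Unit cube corners centered at the origin (object frame).
corners = np.array([[x, y, z] for x in (-0.5, 0.5)
                    for y in (-0.5, 0.5) for z in (-0.5, 0.5)])
pose = np.eye(4)
pose[:3, 3] = [0.0, 0.0, 1.0]  # place the object 1 m in front of the camera
print(transform(corners, pose)[0])  # [-0.5 -0.5  0.5]
```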
  • OAK-D 3D pose:

First, install the depthai library: pip install depthai

python depthai_draw_pose3d.py
  • Azure Kinect 3D pose:

First, install the pyKinectAzure library: pip install pykinect_azure

python kinect_draw_pose3d.py


