Paper

Khoi Nguyen, Sinisa Todorovic, “A Weakly Supervised Amodal Segmenter with Boundary Uncertainty Estimation”, accepted to ICCV 2021.

Our code is mainly based on the code released with the paper: Xiaohang Zhan, Xingang Pan, Bo Dai, Ziwei Liu, Dahua Lin, Chen Change Loy, “Self-Supervised Scene De-occlusion”.

Requirements

  • pytorch>=0.4.1

  • Install the remaining dependencies with:

    pip install -r requirements.txt
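
A quick way to confirm the environment is usable before training is a short check like the one below. This is only a sketch, not part of the repository:

    # Sanity check (not part of the repo): confirm that PyTorch imports
    # and whether a CUDA device is visible.
    import torch

    print("PyTorch version:", torch.__version__)
    print("CUDA available: ", torch.cuda.is_available())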

Data Preparation

COCOA dataset proposed in Semantic Amodal Segmentation.

  1. Download COCO2014 train and val images from here and unzip.

  2. Download COCOA annotations from here and untar.

  3. Ensure the COCOA folder looks like:

    COCOA/
      |-- train2014/
      |-- val2014/
      |-- annotations/
        |-- COCO_amodal_train2014.json
        |-- COCO_amodal_val2014.json
        |-- COCO_amodal_test2014.json
        |-- ...
    
  4. Create a symbolic link (an optional layout check is sketched after these steps):

    cd deocclusion
    mkdir data
    cd data
    ln -s /path/to/COCOA
    
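After the symlink is in place, a small check like the following can confirm that the paths resolve. It is only a sketch, not part of the repository, and it assumes the annotation files use the usual COCO-style layout with "images" and "annotations" keys:

    # Hypothetical layout check for COCOA; adjust the path or keys if yours differ.
    import json

    ann_path = "data/COCOA/annotations/COCO_amodal_train2014.json"
    with open(ann_path) as f:
        ann = json.load(f)

    print("images:     ", len(ann.get("images", [])))
    print("annotations:", len(ann.get("annotations", [])))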

KINS dataset proposed in Amodal Instance Segmentation with KINS Dataset.

  1. Download the left color images of object data in the KITTI dataset from here and unzip.

  2. Download the KINS annotations from here, corresponding to this commit.

  3. Ensure the KINS folder looks like:

    KINS/
      |-- training/image_2/
      |-- testing/image_2/
      |-- instances_train.json
      |-- instances_val.json
    
  4. Create a symbolic link (again, an optional check follows these steps):

    cd deocclusion/data
    ln -s /path/to/KINS
    
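As with COCOA, a quick check (illustrative only; the paths follow the folder layout shown above) can confirm the KINS symlink resolves:

    # Hypothetical check that data/KINS points at the expected layout.
    import os

    root = "data/KINS"
    expected = [
        "training/image_2",
        "testing/image_2",
        "instances_train.json",
        "instances_val.json",
    ]
    for rel in expected:
        path = os.path.join(root, rel)
        print(("OK       " if os.path.exists(path) else "MISSING  ") + path)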

Train

Train PCNet-M

  1. Train (taking COCOA as an example).

    ./train_pcnet_m_std_no_rgb_gaussian.sh
    
  2. Monitor training status and visual results with TensorBoard.

    sh tensorboard.sh $PORT
    
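Here $PORT is the port TensorBoard should listen on; for example, sh tensorboard.sh 6006 uses TensorBoard's usual default port (assuming the script simply forwards the port to TensorBoard).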

Evaluate

  • Execute:

    ./test_pcnet_m.sh
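
For reference, the metric typically reported for amodal segmentation is mean mask IoU between predicted and ground-truth amodal masks. The snippet below is illustrative only and is not the repository's evaluation code (test_pcnet_m.sh handles that):

    # Illustrative mask-IoU computation for two boolean amodal masks.
    import numpy as np

    def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
        """IoU of two boolean masks with the same shape."""
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        return float(inter) / float(union) if union > 0 else 1.0

    # Toy example: two overlapping rectangles.
    pred = np.zeros((4, 4), dtype=bool)
    pred[1:3, 1:4] = True
    gt = np.zeros((4, 4), dtype=bool)
    gt[1:3, 0:3] = True
    print(mask_iou(pred, gt))  # 0.5 (4 overlapping pixels, union of 8)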

Bibtex

@InProceedings{Nguyen_2021_ICCV,
    author    = {Nguyen, Khoi and Todorovic, Sinisa},
    title     = {A Weakly Supervised Amodal Segmenter With Boundary Uncertainty Estimation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {7396-7405}
}

Acknowledgement

  1. We developed our approach based on the code from https://github.com/XiaohangZhan/deocclusion/

  2. We used the code and models of GCA-Matting in our demo.

  3. We modified some code from pytorch-inpainting-with-partial-conv to train the PCNet-C.
