OW-DETR: Open-world Detection Transformer (CVPR 2022)
[Paper]
Akshita Gupta*, Sanath Narayan*, K J Joseph, Salman Khan, Fahad Shahbaz Khan, Mubarak Shah
(* denotes equal contribution)
Introduction
Open-world object detection (OWOD) is a challenging computer vision problem in which the task is to detect a known set of object categories while simultaneously identifying unknown objects. Additionally, the model must incrementally learn new classes that become known in subsequent training episodes. Distinct from standard object detection, the OWOD setting poses significant challenges: generating quality candidate proposals for potentially unknown objects, separating unknown objects from the background, and detecting diverse unknown objects. Here, we introduce a novel end-to-end transformer-based framework, OW-DETR, for open-world object detection. The proposed OW-DETR comprises three dedicated components, namely attention-driven pseudo-labeling, novelty classification, and objectness scoring, to explicitly address the aforementioned OWOD challenges. Our OW-DETR explicitly encodes multi-scale contextual information, possesses less inductive bias, enables knowledge transfer from known classes to the unknown class, and can better discriminate between unknown objects and background. Comprehensive experiments are performed on two benchmarks: MS-COCO and PASCAL VOC. Extensive ablations reveal the merits of our proposed contributions. Further, our model outperforms the recently introduced OWOD approach, ORE, with absolute gains ranging from 1.8% to 3.3% in terms of unknown recall on MS-COCO. For incremental object detection, OW-DETR outperforms the state-of-the-art for all settings on PASCAL VOC.
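For intuition, here is a minimal sketch (not the released implementation) of the attention-driven pseudo-labeling idea: proposals that are not matched to known-class ground truth are scored by the mean feature activation inside their box, and the top-scoring ones are pseudo-labeled as unknown. All function and variable names below are illustrative.

```python
import torch

def attention_objectness(feature_map, boxes):
    """Score each box by the mean activation inside it.

    feature_map: (H, W) tensor, e.g. a backbone feature map averaged over
                 channels (illustrative stand-in for the attention map).
    boxes: (N, 4) tensor of [x1, y1, x2, y2] in feature-map coordinates.
    """
    scores = []
    for x1, y1, x2, y2 in boxes.round().long():
        region = feature_map[y1:y2 + 1, x1:x2 + 1]
        scores.append(region.mean() if region.numel() else feature_map.new_tensor(0.0))
    return torch.stack(scores)

def select_unknown_pseudo_labels(feature_map, proposals, matched_mask, top_k=5):
    """Pseudo-label the top-k unmatched proposals as 'unknown'."""
    scores = attention_objectness(feature_map, proposals)
    scores[matched_mask] = float('-inf')          # ignore boxes matched to known classes
    k = min(top_k, int((~matched_mask).sum().item()))
    return torch.topk(scores, k).indices          # indices of unknown pseudo-labels
```

In OW-DETR the scoring is driven by the transformer's multi-scale features and the selected boxes supervise the novelty-classification and objectness branches; see the paper for the exact formulation.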
Installation
Requirements
- Linux, CUDA>=9.2, GCC>=5.4
- Python>=3.7
We recommend using Anaconda to create a conda environment:
conda create -n owdetr python=3.7 pip
Then, activate the environment:
conda activate owdetr
Install PyTorch and torchvision (change cudatoolkit to match your CUDA version; for detailed installation instructions, see the official PyTorch website):
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=10.2 -c pytorch
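Optionally, a quick sanity check that PyTorch was installed with CUDA support (the expected values in the comments assume the command above):

```python
import torch

print(torch.__version__)          # expect 1.8.0
print(torch.cuda.is_available())  # expect True on a CUDA-capable machine
print(torch.version.cuda)         # expect your cudatoolkit version, e.g. 10.2
```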
- Other requirements
pip install -r requirements.txt
Compiling CUDA operators
cd ./models/ops
sh ./make.sh
# unit test (should see all checking is True)
python test.py
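If the build succeeded, the compiled extension should also import on its own. Deformable DETR's ops package builds a module named MultiScaleDeformableAttention; a quick check, assuming that build name:

```python
# Should import without error once ./make.sh has built the extension.
import MultiScaleDeformableAttention  # noqa: F401
print("Deformable attention CUDA ops loaded")
```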
Dataset preparation
OWOD paper splits
The splits are present inside the data/VOC2007/OWOD/ImageSets/ folder. The remaining dataset can be downloaded using this link.
The files should be organized in the following structure:
code_root/
└── data/
└── VOC2007/
└── OWOD/
├── JPEGImages
├── ImageSets
└── Annotations
New proposed splits
The splits are present inside the data/VOC2007/OWDETR/ImageSets/ folder.
- Please download the COCO 2017 dataset inside the data/ folder.
- Transfer images from the train2017 and val2017 folders to data/VOC2007/OWDETR/JPEGImages/.
- Run coco2voc.py to convert all COCO annotations to VOC format and add them to data/VOC2007/OWDETR/Annotations/ (a sketch of this conversion appears after the note below).
All the above steps can be skipped if the COCO dataloader is used instead. (Update coming soon.)
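For reference, here is a minimal sketch of what a COCO-to-VOC conversion such as coco2voc.py does, using pycocotools. The fields follow the VOC annotation format, but paths and details are illustrative, so defer to the actual script in this repo:

```python
import os
from xml.etree import ElementTree as ET
from pycocotools.coco import COCO

def coco_to_voc(coco_json, out_dir):
    """Write one VOC-style XML per COCO image (minimal fields only)."""
    coco = COCO(coco_json)
    os.makedirs(out_dir, exist_ok=True)
    for img_id in coco.getImgIds():
        img = coco.loadImgs(img_id)[0]
        root = ET.Element('annotation')
        ET.SubElement(root, 'filename').text = img['file_name']
        size = ET.SubElement(root, 'size')
        ET.SubElement(size, 'width').text = str(img['width'])
        ET.SubElement(size, 'height').text = str(img['height'])
        for ann in coco.loadAnns(coco.getAnnIds(imgIds=img_id)):
            x, y, w, h = ann['bbox']                  # COCO: [x, y, width, height]
            obj = ET.SubElement(root, 'object')
            name = coco.loadCats(ann['category_id'])[0]['name']
            ET.SubElement(obj, 'name').text = name
            box = ET.SubElement(obj, 'bndbox')        # VOC: xmin/ymin/xmax/ymax
            ET.SubElement(box, 'xmin').text = str(int(x))
            ET.SubElement(box, 'ymin').text = str(int(y))
            ET.SubElement(box, 'xmax').text = str(int(x + w))
            ET.SubElement(box, 'ymax').text = str(int(y + h))
        stem = os.path.splitext(img['file_name'])[0]
        ET.ElementTree(root).write(os.path.join(out_dir, stem + '.xml'))
```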
The files should be organized in the following structure:
code_root/
└── data/
└── VOC2007/
└── OWDETR/
├── JPEGImages
├── ImageSets
└── Annotations
Currently, the dataloader used for OW-DETR expects data in VOC format.
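For reference, a minimal sketch of reading one VOC-format annotation the way a VOC-style dataloader would (illustrative; the repo's actual loader additionally handles splits, known/unknown classes, and more):

```python
from xml.etree import ElementTree as ET

def parse_voc_annotation(xml_path):
    """Return [(class_name, [xmin, ymin, xmax, ymax]), ...] from a VOC XML file."""
    root = ET.parse(xml_path).getroot()
    objects = []
    for obj in root.iter('object'):
        name = obj.find('name').text
        box = obj.find('bndbox')
        coords = [int(float(box.find(t).text)) for t in ('xmin', 'ymin', 'xmax', 'ymax')]
        objects.append((name, coords))
    return objects
```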
Training
Training on single node
The command for training OW-DETR, which is based on Deformable DETR, on 8 GPUs is as follows:
./run.sh
Training on slurm cluster
If you are using a slurm cluster, you can simply run the following command to train on 2 nodes with 8 GPUs each:
sbatch run_slurm.sh
Evaluation
You can get the config file and pretrained model of OW-DETR (the link is in the "Results" section), then run the following command to evaluate it on the test set:
<path to config file> --resume <path to pre-trained model> --eval
Note: For more training and evaluation details, please check the Deformable DETR repository.
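The tables below report unknown recall (U-Recall) and mAP. As a reference, here is a minimal sketch of the U-Recall metric: the fraction of ground-truth unknown objects covered by some predicted unknown box at IoU >= 0.5 (illustrative, not the repo's evaluation code):

```python
def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def unknown_recall(pred_unknown_boxes, gt_unknown_boxes, thr=0.5):
    """U-Recall: fraction of GT unknown boxes matched by some prediction."""
    if not gt_unknown_boxes:
        return 0.0
    hits = sum(
        any(iou(p, g) >= thr for p in pred_unknown_boxes)
        for g in gt_unknown_boxes
    )
    return hits / len(gt_unknown_boxes)
```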
Results
Reported results
| Method | Task 1 URecall | Task 1 mAP | Task 2 URecall | Task 2 mAP | Task 3 URecall | Task 3 mAP | Task 4 URecall | Task 4 mAP |
|---|---|---|---|---|---|---|---|---|
| ORE-EBUI | | | | | | | | |
| OW-DETR | | | | | | | | |
Reproduced results
| Method | Task 1 URecall | Task 1 mAP | Task 1 URL | Task 2 URecall | Task 2 mAP | Task 2 URL | Task 3 URecall | Task 3 mAP | Task 3 URL | Task 4 URecall | Task 4 mAP | Task 4 URL |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ORE-EBUI | | | – | | | – | | | – | | | – |
| OW-DETR | | | URL | | | URL | | | URL | | | URL |
Improved reproduced results
| Method | Task 1 URecall | Task 1 mAP | Task 1 URL | Task 2 URecall | Task 2 mAP | Task 2 URL | Task 3 URecall | Task 3 mAP | Task 3 URL | Task 4 URecall | Task 4 mAP | Task 4 URL |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ORE-EBUI | | | – | | | – | | | – | | | – |
| OW-DETR | | | URL | | | URL | | | URL | | | URL |
License
This repository is released under the Apache 2.0 license as found in the LICENSE file.
Citation
If you use OW-DETR, please consider citing:
@inproceedings{gupta2021ow,
title={OW-DETR: Open-world Detection Transformer},
author={Gupta, Akshita and Narayan, Sanath and Joseph, KJ and
Khan, Salman and Khan, Fahad Shahbaz and Shah, Mubarak},
booktitle={CVPR},
year={2022}
}
Contact
Should you have any questions, please contact [email protected].
Acknowledgments:
OW-DETR builds on the code bases of previous works such as Deformable DETR, DETReg, and OWOD. If you found OW-DETR useful, please consider citing these works as well.