RoRD

Rotation-Robust Descriptors and Orthographic Views for Local Feature Matching

Evaluation and Datasets

Pretrained Models

Download the pretrained models from Google Drive (73.9 MB) and place them in the base directory.

Evaluating RoRD

You can evaluate RoRD on the demo images or on your own images; a minimal sketch of the descriptor-matching idea behind extractMatch.py follows the example output below.

  1. Dependencies can be installed in a conda or virtualenv environment by running:
    1. pip install -r requirements.txt
  2. python extractMatch.py <rgb_image1> <rgb_image2> --model_file <path to the RoRD model file>
  3. Example:
    python extractMatch.py demo/rgb/rgb1_1.jpg demo/rgb/rgb1_2.jpg --model_file models/rord.pth
  4. This should give you output like this:

RoRD matches (example output image)

SIFT matches (example output image)
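The sketch below shows, under stated assumptions, the mutual nearest-neighbour matching step that scripts like extractMatch.py build on. It is illustrative only: the descriptor tensors are random placeholders standing in for descriptors extracted by the RoRD model, and the helper mutual_nn_matches is not part of this repository.

import torch

def mutual_nn_matches(desc1: torch.Tensor, desc2: torch.Tensor) -> torch.Tensor:
    # desc1: (N, D), desc2: (M, D) L2-normalized descriptors; returns (K, 2) index pairs.
    sim = desc1 @ desc2.t()                      # cosine similarity matrix
    nn12 = sim.argmax(dim=1)                     # best match in image 2 for each descriptor in image 1
    nn21 = sim.argmax(dim=0)                     # best match in image 1 for each descriptor in image 2
    ids1 = torch.arange(desc1.shape[0], device=desc1.device)
    mutual = ids1 == nn21[nn12]                  # keep only pairs that agree in both directions
    return torch.stack([ids1[mutual], nn12[mutual]], dim=1)

# Random tensors stand in for real RoRD descriptors of two images.
d1 = torch.nn.functional.normalize(torch.randn(500, 512), dim=1)
d2 = torch.nn.functional.normalize(torch.randn(600, 512), dim=1)
print(mutual_nn_matches(d1, d2).shape)           # (K, 2) matched index pairs

In the actual scripts, the matched indices are mapped back to keypoint locations in the two images before drawing the matches or estimating a pose.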

DiverseView Dataset

Download the dataset from Google Drive (97.8 MB) and place it in the base directory (only needed if you want to evaluate on the DiverseView Dataset).

Evaluation on DiverseView Dataset

The DiverseView Dataset is a custom dataset consisting of 4 scenes whose images exhibit large camera rotations and viewpoint changes. A minimal sketch of the correspondence-based rigid registration that these pose-estimation steps rely on appears after the list below.

  1. Pose estimation on a single image pair of the DiverseView dataset:
    1. cd demo
    2. python register.py --rgb1 <path to rgb image 1> --rgb2 <path to rgb image 2> --depth1 <path to depth image 1> --depth2 <path to depth image 2> --model_rord <path to the RoRD model file>
    3. Example:
      python register.py --rgb1 rgb/rgb2_1.jpg --rgb2 rgb/rgb2_2.jpg --depth1 depth/depth2_1.png --depth2 depth/depth2_2.png --model_rord ../models/rord.pth
    4. This should give you output like this:

RoRD matches in perspective view (example output image)

RoRD matches in orthographic view (example output image)
  2. To visualize the registered point cloud, use the --viz3d flag:
    1. python register.py --rgb1 rgb/rgb2_1.jpg --rgb2 rgb/rgb2_2.jpg --depth1 depth/depth2_1.png --depth2 depth/depth2_2.png --model_rord ../models/rord.pth --viz3d

Point cloud registration using correspondences (example output image)
  3. Pose estimation on a sequence of the DiverseView dataset:
    1. cd evaluation/DiverseView/
    2. python evalRT.py --dataset <path to DiverseView dataset> --sequence <sequence name> --model_rord <path to RoRD model> --output_dir <name of output dir>
    3. Example:
      1. python evalRT.py --dataset /path/to/preprocessed/ --sequence data1 --model_rord ../../models/rord.pth --output_dir out
    4. This generates an out folder containing the predicted transformations, along with matching visualizations in out/vis, such as the image below:

RoRD matching visualization (example output image)
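As a reference for what register.py and evalRT.py do conceptually, here is a minimal sketch of estimating a rigid transform (R, t) from matched 3D points with the Kabsch/SVD method. The arrays pts1 and pts2 are placeholders for keypoint correspondences back-projected through the depth maps, and the helper rigid_transform_3d is not the repository's code; a real pipeline would typically wrap such an estimator in RANSAC to reject outlier matches.

import numpy as np

def rigid_transform_3d(pts1: np.ndarray, pts2: np.ndarray):
    # pts1, pts2: (N, 3) corresponding 3D points; returns R (3x3), t (3,) with pts2 ≈ R @ pts1 + t.
    c1, c2 = pts1.mean(axis=0), pts2.mean(axis=0)
    H = (pts1 - c1).T @ (pts2 - c2)              # cross-covariance of centred point sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # repair an accidental reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c2 - R @ c1
    return R, t

# Sanity check: recover a known 90-degree rotation and a translation.
rng = np.random.default_rng(0)
src = rng.standard_normal((100, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R_est, t_est = rigid_transform_3d(src, dst)
print(np.allclose(R_est, R_true), np.round(t_est, 3))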

Training RoRD on PhotoTourism Images

  1. Training uses rotation homographies with initialization from D2-Net weights (download the base models as described in Pretrained Models); a sketch of the rotation-homography idea follows this list.

  2. Download the brandenburg_gate dataset, which is used in configs/train_scenes_small.txt, from here (5.3 GB) and place it in the phototourism folder.

  3. The folder structure should be:

    phototourism/
    └── brandenburg_gate
        └── dense
            ├── images
            ├── stereo
            └── sparse
    
  4. python trainPT_ipr.py --dataset_path <path_to_phototourism_folder> --init_model models/d2net.pth --plot
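The sketch below illustrates, under assumptions, the in-plane rotation-homography augmentation this training stage relies on: an image is warped by a rotation expressed as a 3x3 homography, so the original and warped views form a training pair with known pixel-level correspondence. It is not trainPT_ipr.py itself, the helper rotation_homography is ours, and the image path is a hypothetical placeholder.

import cv2
import numpy as np

def rotation_homography(h: int, w: int, angle_deg: float) -> np.ndarray:
    # 3x3 homography rotating about the image centre by angle_deg.
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)  # 2x3 affine rotation
    return np.vstack([M, [0.0, 0.0, 1.0]])                           # lift to a 3x3 homography

# Hypothetical image path; substitute any PhotoTourism image.
img = cv2.imread("phototourism/brandenburg_gate/dense/images/example.jpg")
H = rotation_homography(img.shape[0], img.shape[1], angle_deg=90.0)
warped = cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))
# A pixel (x, y) in img maps to H @ [x, y, 1] in warped, so ground-truth
# correspondences for descriptor training come for free.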

TO-DO

  • [ ] Provide VPR code
  • [ ] Provide combined training of RoRD + D2-Net
  • [ ] Provide code for calculating error on the DiverseView Dataset

Credits

Our base model is borrowed from D2-Net.

BibTeX

If you use this code in your project, please cite the following paper:


@misc{rord2021,
      title={RoRD: Rotation-Robust Descriptors and Orthographic Views for Local Feature Matching}, 
      author={Udit Singh Parihar and Aniket Gujarathi and Kinal Mehta and Satyajit Tourani and Sourav Garg and Michael Milford and K. Madhava Krishna},
      year={2021},
      eprint={2103.08573},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

GitHub

https://github.com/UditSinghParihar/RoRD