Panoramic BlitzNet

Tensorflow 2.x implementation of Panoramic BlitzNet for object detection and semantic segmentation on indoor panoramic images.

Introduction

This repository contains the original implementation of the paper: ‘What’s in my Room? Object Recognition on Indoor Panoramic Images’ by Julia Guerrero-Viu, Clara Fernandez-Labrador, Cédric Demonceaux and José J. Guerrero. More info can be found on our project page.

Our implementation is based on the previous work of Dvornik et al., BlitzNet, whose code can be found on their webpage.

Use Instructions

We recommend using a virtual environment for this project (e.g. Anaconda).

$ conda create -n envname python=3.8.5 # replace envname with your preferred name

Install Requirements

1. This code has been compiled and tested using:

  • python 3.8.5
  • cuda 10.1
  • cuDNN 7.6
  • TensorFlow 2.3

You are free to try different configurations, but we cannot guarantee that they have been tested.

2. Install python requirements:

(envname)$ pip install -r requirements.txt
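
To verify the setup, you can optionally check that TensorFlow reports the expected version and detects your GPU. The one-liner below is only a sanity check and is not part of the project scripts:

(envname)$ python3 -c "import tensorflow as tf; print(tf.__version__); print(tf.config.list_physical_devices('GPU'))" # expect 2.3.x and at least one GPU device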

Download Dataset

SUN360: download

Copy the folder ‘dataset’ to the folder where you have the repository files.

Download Model

download

Download the folder ‘Checkpoints’ which includes the model weights and copy it to the folder where you have the repository files.

Test run

Ensure the folders ‘dataset’ and ‘Checkpoints’ are in the same folder as the Python files.
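
As a quick check, a listing of the project folder should include the files and folders mentioned in this README (a sketch; other repository files are omitted):

(envname)$ ls /path/to/project/folder # expect, among others: Checkpoints/ dataset/ config.py requirements.txt sun360.py test.py training_loop.py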

To run our demo, please run:

(envname)$ python3 test.py PanoBlitznet # Runs the test examples and saves results in 'Results' folder
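
If the demo finishes correctly, the outputs are written to the ‘Results’ folder and can be listed with, for example:

(envname)$ ls Results/ # exact file names depend on the test examples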

Training and evaluation

If you want to train the model with different parameters and evaluate the results, follow these steps:

1. Create a TFDS from SUN360:

Do this ONLY the first time you use this repository.

Ensure the folder ‘dataset’ is in the same folder as the Python files.

Change line 86 in the sun360.py file to point to your path to the ‘dataset’ folder.

(envname)$ cd /path/to/project/folder
(envname)$ tfds build sun360.py # Creates a TFDS (Tensorflow Datasets) from SUN360
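
Optionally, you can check that the dataset was built and registered correctly by loading it with TensorFlow Datasets. Note that the dataset name 'sun360' below is an assumption based on the builder file name:

(envname)$ python3 -c "import tensorflow_datasets as tfds; print(tfds.load('sun360'))" # 'sun360' assumed from sun360.py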

2. Train a model:

To train a model, change the parameters you want in the config.py file. You are free to try different configurations, but we cannot guarantee that they have been tested.

Usage: training_loop.py [--restore_ckpt]

Options:
    -h --help        Show this screen.
    --restore_ckpt   Restore weights from previous training to continue with the training.
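
As a sketch based on the usage above, typical invocations look like this (the exact behaviour depends on your config.py settings):

(envname)$ python3 training_loop.py # trains with the parameters set in config.py
(envname)$ python3 training_loop.py --restore_ckpt # resumes training from the previous checkpoint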