We address the problem of estimating depth with multi-modal audio-visual data. Inspired by the ability of animals, such as bats and dolphins, to infer the distance of objects with echolocation, we propose an end-to-end deep-learning-based pipeline utilizing RGB images, binaural echoes, and estimated material properties of various objects within a scene for the task of depth estimation.
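As a toy illustration of the echolocation principle that motivates the work (not part of the proposed pipeline), the distance to an object can be recovered from the round-trip delay of an echo; the speed of sound in air (~343 m/s) is an assumed constant here:

```python
def distance_from_echo(delay_s, speed_of_sound=343.0):
    """Estimate object distance (meters) from the round-trip echo delay.

    The sound travels to the object and back, so the one-way distance
    is half the total path covered during the delay.
    """
    return speed_of_sound * delay_s / 2.0

# An echo arriving 20 ms after emission corresponds to roughly 3.43 m.
print(distance_from_echo(0.020))
```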
The code is tested with
- Python 3.6
- PyTorch 1.6.0
- NumPy 1.19.5
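A minimal environment setup, assuming pip and the default (CPU) wheels are sufficient; adjust the PyTorch install command for your CUDA version:

```shell
pip install torch==1.6.0 numpy==1.19.5
```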
Replica-VisualEchoes can be obtained from here. We have used the 128x128 image resolution for our experiments.
MatterportEchoes is an extension of the existing Matterport3D dataset. In order to obtain the raw frames, please forward the access request acceptance from the authors of the Matterport3D dataset. We will release the procedure to obtain the frames and echoes using habitat-sim and soundspaces in the near future.
We have provided pre-trained models for both datasets here. For each dataset, four different parts of the model are saved individually, where `*` in the checkpoint name represents the name of the dataset, i.e. `replica` or `mp3d`.
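As a sketch of how the per-dataset checkpoint files could be enumerated before loading, using hypothetical part names (the actual filenames ship with the download above):

```python
from pathlib import Path

# Hypothetical part names for the four saved pieces of the model;
# the real checkpoint filenames come with the pre-trained download.
MODEL_PARTS = ["rgbdepth", "audiodepth", "material", "attention"]

def checkpoint_paths(checkpoints_dir, dataset):
    """Build one checkpoint path per model part, where the dataset name
    (replica or mp3d) takes the place of `*` in the saved name."""
    return [Path(checkpoints_dir) / "{}_{}.pth".format(part, dataset)
            for part in MODEL_PARTS]

for p in checkpoint_paths("checkpoints", "mp3d"):
    print(p)
```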
To train the model, first download the pre-trained material net from the link above.
```shell
python train.py \
    --validation_on \
    --dataset mp3d \
    --img_path path_to_img_folder \
    --metadatapath path_to_metadata \
    --audio_path path_to_audio_folder \
    --checkpoints_dir path_to_save_checkpoints \
    --init_material_weight path_to_pre-trained_material_net
```
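The flags above would typically be consumed with `argparse`; the following is a sketch of how `train.py` might parse them, not the script's actual implementation:

```python
import argparse

def build_parser():
    """Sketch of a parser for the training flags shown above."""
    parser = argparse.ArgumentParser(description="Train the audio-visual depth model")
    parser.add_argument("--validation_on", action="store_true",
                        help="run validation during training")
    parser.add_argument("--dataset", choices=["replica", "mp3d"], required=True)
    parser.add_argument("--img_path", help="folder with RGB frames")
    parser.add_argument("--metadatapath", help="folder with dataset metadata")
    parser.add_argument("--audio_path", help="folder with binaural echoes")
    parser.add_argument("--checkpoints_dir", help="where to save checkpoints")
    parser.add_argument("--init_material_weight",
                        help="path to the pre-trained material net")
    return parser

args = build_parser().parse_args(
    ["--validation_on", "--dataset", "mp3d", "--img_path", "imgs"]
)
print(args.dataset)
```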
To evaluate the method using the pre-trained model, download the models and the dataset for the corresponding split.
- Evaluation for Replica dataset
```shell
python test.py \
    --img_path path_to_img_folder \
    --audio_path path_to_audio_data \
    --checkpoints_dir path_to_the_pretrained_model \
    --dataset replica
```
- Evaluation for Matterport3D dataset
```shell
python test.py \
    --img_path path_to_img_folder \
    --audio_path path_to_audio_data \
    --checkpoints_dir path_to_the_pretrained_model \
    --dataset mp3d
```
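Evaluation of depth estimation typically reports standard error metrics; as an illustration (the exact metric set used by the paper may differ), root-mean-square error between a predicted and a ground-truth depth map can be computed as:

```python
import math

def rmse(pred, gt):
    """Root-mean-square error between flattened depth maps of equal length."""
    assert len(pred) == len(gt), "depth maps must have the same size"
    return math.sqrt(sum((p - g) ** 2 for p, g in zip(pred, gt)) / len(pred))

# Toy 2x2 depth maps flattened to lists (values in meters).
print(rmse([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 6.0]))  # 1.0
```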