NLOS-OT

Official implementation of NLOS-OT: Passive Non-Line-of-Sight Imaging Using Optimal Transport (accepted by IEEE TIP)

Description

In this repository, we release the NLOS-OT code in PyTorch as well as the passive NLOS imaging dataset NLOS-Passive.

  • Problem statement: Passive NLOS imaging

  • NLOS-OT architecture

  • The reconstruction results of NLOS-OT trained on a specific dataset, without a partial occluder

  • The generalization results of NLOS-OT trained only on a dataset from STL-10, with an unknown partial occluder

Installation

  1. Install the required packages (a quick environment check is sketched after this list)

  2. Clone this repo
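
After both steps, you can sanity-check your environment. The snippet below only assumes that PyTorch is installed (the repo is implemented in PyTorch); the actual dependency list may include more packages.

import torch

# Quick environment check: confirms PyTorch is importable and reports
# whether a CUDA GPU is visible (optional, but recommended for training).
print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())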

Prepare Data

  1. Download the dataset

You can download each group of NLOS-Passive through the link below. Note that each compressed package (.zip, or .z01 plus .zip for split archives) contains one group of measured data.

link: https://pan.baidu.com/s/19Q48BWm1aJQhIt6BF9z-uQ

code: j3p2

If the link fails, please feel free to contact me.

  2. Organize the file structure of the dataset (a hypothetical layout is sketched below)
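
The data-loading code in this repo defines the exact directory names it expects; the layout below is only a hypothetical sketch of a group-per-folder organization. Every name here is a placeholder, not the repo's actual path.

NLOS-OT/
└── dataset/                # hypothetical root folder
    ├── group_01/           # one extracted package = one group of measured data
    │   ├── train/
    │   └── test/
    └── group_02/
        ├── train/
        └── test/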

Demo / Evaluate

Before running the demo, you should have installed the required packages and organized the dataset according to the file structure described above.

  1. Download the pretrained .pth checkpoint

  2. Run test.py (a rough sketch of what it does is given below)
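
test.py takes care of this already; the snippet below is only a minimal PyTorch sketch of what loading a checkpoint and running inference generally looks like. The tiny stand-in module, the checkpoint path, and the tensor shape are all placeholders, not this repo's actual API (the real model is the IntroVAE-based encoder/decoder).

import torch
import torch.nn as nn

# Tiny stand-in for the NLOS-OT network, used only to keep this sketch runnable.
model = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1))

# With a real checkpoint you would restore the weights first (placeholder path):
# model.load_state_dict(torch.load("checkpoints/nlos_ot.pth", map_location="cpu"))
model.eval()

with torch.no_grad():
    projection = torch.randn(1, 3, 256, 256)  # placeholder projection image
    reconstruction = model(projection)        # recovered hidden scene
print(reconstruction.shape)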

Train

Before training, you should have installed the required packages and organized the dataset according to the file structure described above.
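
The training script in this repo defines the actual loop and losses; as a hedged sketch only, one optimization step for an encoder/decoder pair with a plain reconstruction loss might look like the code below. All module and variable names are placeholders, and the full NLOS-OT objective additionally uses optimal-transport terms, as described in the paper.

import torch
import torch.nn as nn

# Placeholder encoder/decoder standing in for the IntroVAE-based networks.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
decoder = nn.Sequential(nn.Linear(128, 3 * 64 * 64), nn.Unflatten(1, (3, 64, 64)))

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4
)
recon_loss = nn.MSELoss()

# One step on a fake (projection, hidden scene) pair; a real run would iterate
# over a DataLoader built from the organized NLOS-Passive folders.
projection = torch.randn(8, 3, 64, 64)
target = torch.randn(8, 3, 64, 64)

optimizer.zero_grad()
reconstruction = decoder(encoder(projection))
loss = recon_loss(reconstruction, target)
loss.backward()
optimizer.step()
print(f"step loss: {loss.item():.4f}")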

Citation

If you find our work and code helpful, please consider citing our paper.

We thank the authors of the following great works:

  • DeblurGAN, pix2pix: Our code is based on the frameworks provided by these two repos.

  • IntroVAE: The encoder and decoder in NLOS-OT are based on IntroVAE.

  • AE-OT, AE-OT-GAN: The idea of using optimal transport for passive NLOS imaging in NLOS-OT comes from these two works.

If you find them helpful, please cite:

@inproceedings{kupynDeblurGANBlindMotion2018,
	title = {{DeblurGAN}: Blind Motion Deblurring Using Conditional Adversarial Networks},
	booktitle = {2018 {IEEE} Conference on Computer Vision and Pattern Recognition ({CVPR})},
	author = {Kupyn, Orest and Budzan, Volodymyr and Mykhailych, Mykola and Mishkin, Dmytro and Matas, Jiri},
	year = {2018},
}

@inproceedings{isolaImagetoimageTranslationConditional2017,
	title = {Image-to-Image Translation with Conditional Adversarial Networks},
	booktitle = {2017 {IEEE} Conference on Computer Vision and Pattern Recognition ({CVPR})},
	author = {Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A.},
	year = {2017},
	pages = {5967--5976},
}

@inproceedings{huang_introvae_2018,
	title = {{IntroVAE}: Introspective Variational Autoencoders for Photographic Image Synthesis},
	booktitle = {Advances in Neural Information Processing Systems ({NeurIPS})},
	author = {Huang, Huaibo and Li, Zhihang and He, Ran and Sun, Zhenan and Tan, Tieniu},
	year = {2018},
}

@article{an_ae-ot-gan_2020,
	title = {{AE-OT-GAN}: Training {GANs} from Data Specific Latent Distribution},
	journal = {arXiv preprint},
	author = {An, Dongsheng and Guo, Yang and Zhang, Min and Qi, Xin and Lei, Na and Yau, Shing-Tung and Gu, Xianfeng},
	year = {2020},
}

@inproceedings{an_ae-ot_2020,
	title = {{AE-OT}: A New Generative Model Based on Extended Semi-Discrete Optimal Transport},
	booktitle = {International Conference on Learning Representations ({ICLR})},
	author = {An, Dongsheng and Guo, Yang and Lei, Na and Luo, Zhongxuan and Yau, Shing-Tung and Gu, Xianfeng},
	year = {2020},
}
