# Instant Neural Surface Reconstruction
This repository contains a concise and extensible implementation of NeRF and NeuS for neural surface reconstruction, based on Instant-NGP and the PyTorch-Lightning framework. Training on a NeRF-Synthetic scene takes ~5 min for NeRF and ~10 min for NeuS on a single RTX 3090.
*(demo videos: NeRF in 5 min, NeuS in 10 min)*
This repository aims to provide a highly efficient yet customizable boilerplate for research projects based on NeRF or NeuS. Features include:
- Acceleration techniques from Instant-NGP: multiresolution hash encoding and fully fused networks via tiny-cuda-nn, occupancy grid pruning and rendering via nerfacc
- Out-of-the-box multi-GPU and mixed-precision training via PyTorch-Lightning
- Hierarchical project layout designed to be easily customized and extended, with flexible experiment configuration via OmegaConf
## Requirements

Note: to use the multiresolution hash encoding or fully fused networks provided by tiny-cuda-nn, you need at least an RTX 2080 Ti; see https://github.com/NVlabs/tiny-cuda-nn#requirements for details.
- Install PyTorch>=1.10 here, based on the package manager you use and your CUDA version (older PyTorch versions may work but have not been tested)
- Install the tiny-cuda-nn PyTorch extension:

```bash
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
```

- Install the remaining dependencies:

```bash
pip install -r requirements.txt
```
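After installing, a quick sanity check can confirm that the main dependencies are importable. This is a minimal sketch, not part of the repository; `tinycudann` is the module name installed by the tiny-cuda-nn PyTorch bindings.

```python
# Check that the key dependencies of this repository are importable.
# find_spec() locates a module without importing it, so this is safe to
# run even if CUDA is not available on the current machine.
import importlib.util

for pkg in ("torch", "pytorch_lightning", "omegaconf", "nerfacc", "tinycudann"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'ok' if found else 'MISSING'}")
```

Any `MISSING` line indicates a package that still needs to be installed.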
## Training on NeRF-Synthetic
Download the NeRF-Synthetic data here and put it under `load/`. The file structure should be like:
Run the launch script with `--train`, specifying the config file, the GPU(s) to be used (GPU 0 is used by default), and the scene name:
```bash
# train NeRF
python launch.py --config configs/nerf-blender.yaml --gpu 0 --train dataset.scene=lego tag=example

# train NeuS with mask
python launch.py --config configs/neus-blender.yaml --gpu 0 --train dataset.scene=lego tag=example

# train NeuS without mask
python launch.py --config configs/neus-blender.yaml --gpu 0 --train dataset.scene=lego tag=example system.loss.lambda_mask=0.0
```
The code snapshots, checkpoints, and experiment outputs are saved to `exp/[name]/[tag]@[timestamp]`, and TensorBoard logs can be found at `runs/[name]/[tag]@[timestamp]`. You can change any configuration in the YAML file by specifying arguments without `--`, for example:
```bash
python launch.py --config configs/nerf-blender.yaml --gpu 0 --train dataset.scene=lego tag=iter50k seed=0 trainer.max_steps=50000
```
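These `key=value` arguments follow OmegaConf's dot-list syntax: each dotted key addresses a nested field of the YAML config. The sketch below illustrates the idea in plain Python; it is not the repository's actual launch code, and `apply_overrides` is a hypothetical helper.

```python
# Sketch of dot-list overrides as popularized by OmegaConf:
# "trainer.max_steps=50000" sets config["trainer"]["max_steps"] = 50000.
import ast

def apply_overrides(config: dict, overrides: list) -> dict:
    for item in overrides:
        dotted_key, _, raw = item.partition("=")
        *parents, leaf = dotted_key.split(".")
        node = config
        for key in parents:
            node = node.setdefault(key, {})  # create intermediate dicts as needed
        try:
            value = ast.literal_eval(raw)  # parse numbers/bools/lists ...
        except (ValueError, SyntaxError):
            value = raw                    # ... and fall back to plain strings
        node[leaf] = value
    return config

config = {"dataset": {"scene": "chair"}, "trainer": {"max_steps": 20000}}
apply_overrides(config, ["dataset.scene=lego", "trainer.max_steps=50000", "tag=iter50k"])
print(config)
# {'dataset': {'scene': 'lego'}, 'trainer': {'max_steps': 50000}, 'tag': 'iter50k'}
```

OmegaConf additionally validates overrides against the config structure and supports interpolation; this sketch only shows the nesting-and-assignment core.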
Training is by default followed by testing, which computes metrics on the test data, generates animations, and exports the geometry as triangular meshes. If you want to run testing alone, just resume the trained model and replace `--train` with `--test`, for example:
```bash
python launch.py --config path/to/your/exp/config/parsed.yaml --resume path/to/your/exp/ckpt/epoch=0-step=20000 --gpu 0 --test
```
## Benchmarks

All experiments are conducted on a single NVIDIA RTX 3090.
| PSNR | Chair | Drums | Ficus | Hotdog | Lego | Materials | Mic | Ship | Avg. |
| :-- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| NeuS Ours (with mask) | 33.14 | 24.74 | 28.61 | 34.39 | 29.78 | 26.71 | 32.60 | 26.85 | 29.60 |

| Training Time (mm:ss) | Chair | Drums | Ficus | Hotdog | Lego | Materials | Mic | Ship | Avg. |
| :-- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| NeuS Ours (with mask) | 08:50 | 09:01 | 08:53 | 09:19 | 09:37 | 09:17 | 08:17 | 11:53 | 09:23 |
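As a quick consistency check, the Avg. columns can be reproduced from the per-scene values (plain arithmetic over the numbers copied from the tables above):

```python
# Reproduce the Avg. columns of the benchmark tables.
psnr = [33.14, 24.74, 28.61, 34.39, 29.78, 26.71, 32.60, 26.85]
print(round(sum(psnr) / len(psnr), 2))  # 29.6

def to_seconds(mmss):
    """Convert an 'mm:ss' string to total seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

times = ["08:50", "09:01", "08:53", "09:19", "09:37", "09:17", "08:17", "11:53"]
avg = sum(to_seconds(t) for t in times) / len(times)
print(f"{int(avg // 60):02d}:{int(avg % 60):02d}")  # 09:23
```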
## TODO

- Support more dataset formats, like COLMAP outputs and DTU
- Support background model based on NeRF++ or Mip-NeRF360
- Support GUI training and interaction
- More illustrations about the framework
## Related Projects

- ngp_pl: Great Instant-NGP implementation in PyTorch-Lightning, with background model and GUI support.
- Instant-NSR: NeuS implementation using multiresolution hash encoding.