Taichi NGP Renderer
Update 2022-10-23: Support depth of field (DoF)
This is an Instant-NGP renderer implemented in Taichi, written entirely in Python. No CUDA! This repository implements only the rendering part of Instant-NGP, but it is simpler and has far less code than the original implementations (Instant-NGP and tiny-cuda-nn).
Install Taichi to get started:
pip install -i https://pypi.taichi.graphics/simple/ taichi-nightly
Note: there is a bug in Taichi's codegen where the modulo operation does not work correctly on `uint32` values. You have to install the nightly version, in which this bug is fixed. See details in issue #6118.
Clone this repository and install the required packages:

git clone https://github.com/Linyou/taichi-ngp-renderer.git
python -m pip install -r requirement.txt
This repository implements only the forward (inference) part of Instant-NGP, which includes:
- Ray intersection with the bounding box
- Ray marching strategy
- Spherical harmonics encoding for ray directions
- Hash table encoding for 3D coordinates
- Fully fused MLP using shared memory
- Volume rendering
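The last step of the pipeline above, volume rendering, can be sketched in plain NumPy. This is a minimal illustration of front-to-back alpha compositing along a single ray with early termination (the sample counts, densities, and threshold value are illustrative, not the renderer's actual settings):

```python
import numpy as np

def composite(sigmas, rgbs, deltas, t_threshold=1e-4):
    """Front-to-back alpha compositing along one ray.

    sigmas: (N,) densities at each sample
    rgbs:   (N, 3) colors at each sample
    deltas: (N,) distances between consecutive samples
    Marching stops once the transmittance T drops below t_threshold.
    """
    color = np.zeros(3)
    T = 1.0  # accumulated transmittance
    for sigma, rgb, delta in zip(sigmas, rgbs, deltas):
        alpha = 1.0 - np.exp(-sigma * delta)
        color += T * alpha * rgb
        T *= 1.0 - alpha
        if T < t_threshold:  # ray is effectively opaque: stop early
            break
    return color, T

# A ray with one dense (green) sample in the middle: almost all of the
# weight lands on that sample.
sigmas = np.array([0.0, 50.0, 0.0])
rgbs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
deltas = np.full(3, 0.1)
color, T = composite(sigmas, rgbs, deltas)
```

The transparency threshold exposed in the GUI plays the role of `t_threshold` here: raising it stops rays earlier and trades quality for speed.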
However, there are some differences compared to the original:
- Taichi currently lacks a `frexp()` method, so I have to use a hard-coded scale of 0.5. I will update the code once Taichi supports this function.
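For reference, Python's standard library shows what `frexp()` computes: it splits a float into a mantissa in [0.5, 1) and a power-of-two exponent. The hard-coded 0.5 is a stand-in for this decomposition (the helper below is hypothetical, just to make the fallback explicit):

```python
import math

# frexp decomposes x into (m, e) with x == m * 2**e and 0.5 <= m < 1.
m, e = math.frexp(3.0)  # 3.0 == 0.75 * 2**2

# Hypothetical stand-in: without frexp, a fixed 0.5 (the lower bound of
# the mantissa range) is used instead of the exact per-value mantissa.
def scale_without_frexp(x):
    return 0.5
```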
Fully Fused MLP
- Instead of a single kernel as in tiny-cuda-nn, this repo uses a separate `rgb_layer()` kernel, because the shared memory size that Taichi currently allows is 48KB, as issue #6385 points out. This could be improved in the future.
- tiny-cuda-nn uses Tensor Cores for `float16` multiplication, which is not an accessible feature in Taichi, so I directly convert all the data to `ti.float16` to speed up the computation.
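A NumPy sketch of the half-precision idea: keep weights and activations in `float16` end to end. This cannot demonstrate shared memory or kernel fusion, and the layer sizes are illustrative, not the renderer's actual network dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer MLP with all weights and activations stored as float16,
# mirroring the renderer's choice of half precision throughout.
W1 = rng.standard_normal((32, 64)).astype(np.float16)
W2 = rng.standard_normal((64, 3)).astype(np.float16)

def mlp(x):
    h = np.maximum(x @ W1, 0)  # ReLU hidden layer, stays float16
    return h @ W2              # linear output (e.g. RGB)

x = rng.standard_normal((8, 32)).astype(np.float16)
y = mlp(x)
```

In a fused kernel the intermediate `h` would live in on-chip shared memory instead of global memory; on the GPU, halving the precision also halves the bandwidth each layer consumes.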
This code supports real-time rendering with GUI interaction in less than 1GB of VRAM. The GUI offers the following functionality:
- Keyboard and mouse control
- Adjustable resolution
- Adjustable number of samples per ray
- Transparency threshold (stops ray marching early)
- Depth visualization
- Black or white background
- Video recording (requires ffmpeg)
The GUI runs at up to 26 fps on an RTX 3090 GPU at 800 $\times$ 800 resolution. To reach this speed without hurting visual quality, the transparency threshold and `max_samples` are set to 0.1 and 50, respectively.
Run `python taichi_ngp.py --gui --scene lego` to start the GUI. This repository provides eight pre-trained NeRF synthetic scenes: Lego, Ship, Mic, Materials, Hotdog, Ficus, Drums, and Chair.

Running `python taichi_ngp.py --gui --scene <name>` will automatically download the pre-trained model `<name>` into the `./npy_file` folder. Please check the argument parameters in `taichi_ngp.py` for more options.
You can train a new scene with ngp_pl and save the PyTorch model to NumPy using `np.save()`. After that, use the `--model_path` argument to specify the model file.
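A minimal sketch of the export step, assuming the checkpoint is a dict of parameter arrays (shown here with plain NumPy arrays and made-up parameter names; in practice each PyTorch tensor would first go through `.cpu().numpy()`):

```python
import numpy as np

# Hypothetical "state dict": parameter names mapped to NumPy arrays.
state = {
    "hash_encoder.params": np.zeros(16, dtype=np.float16),
    "rgb_net.weight": np.ones((16, 3), dtype=np.float16),
}

# Saving a dict (an object, not a plain array) requires allow_pickle.
np.save("model.npy", state, allow_pickle=True)

# Loading it back: .item() unwraps the 0-d object array into the dict.
restored = np.load("model.npy", allow_pickle=True).item()
```

The resulting `.npy` file is what the `--model_path` argument would then point at.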
Todo:

- Refactor into separate modules
- Support Vulkan backend
- Support real scenes