# Putting NeRF on a Diet
This project implements the paper Putting NeRF on a Diet (DietNeRF) in JAX/Flax. DietNeRF is designed to render high-quality novel views in a few-shot learning setting, a task with which vanilla NeRF (Neural Radiance Fields) struggles. To achieve this, the authors introduce a semantic consistency loss that supervises DietNeRF with prior knowledge from the CLIP Vision Transformer. This supervision lets DietNeRF learn 3D scene reconstruction using CLIP's prior knowledge of 2D views.
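The semantic consistency idea can be sketched in a few lines. The snippet below is a minimal NumPy stand-in, not the repo's actual code: the two embedding vectors stand in for CLIP ViT features of a ground-truth view and a rendered view, and `weight` is an assumed loss-weight hyperparameter. Because the embeddings are L2-normalized, half the squared distance between them equals one minus their cosine similarity.

```python
import numpy as np

def semantic_consistency_loss(emb_target, emb_rendered, weight=0.1):
    """Semantic consistency loss between two CLIP-style image embeddings.

    Both embeddings are L2-normalized, so half the squared distance
    between them equals 1 - cosine_similarity. `weight` is a
    hypothetical loss-weight hyperparameter, not the paper's value.
    """
    a = emb_target / np.linalg.norm(emb_target)
    b = emb_rendered / np.linalg.norm(emb_rendered)
    return weight * 0.5 * np.sum((a - b) ** 2)

# Identical semantics -> zero loss; orthogonal embeddings -> positive loss.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
print(semantic_consistency_loss(e1, e1))  # 0.0
print(semantic_consistency_loss(e1, e2))  # 0.1
```

In DietNeRF this loss is computed between CLIP embeddings of a training image and a rendering from a random pose, so the gradient pulls renderings toward the scene's semantics even for unseen viewpoints.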
- You can check out our demo on Hugging Face Spaces
- Or you can set up our Streamlit demo locally (model checkpoints will be fetched automatically upon startup)
```shell
pip install -r requirements_demo.txt
streamlit run app.py
```
Our code is written in JAX/Flax and is mainly based on jaxnerf from Google Research. The base code is highly optimized for GPU and TPU. For the semantic consistency loss, we use the pretrained CLIP Vision Transformer from the transformers library.
To learn more about DietNeRF, our experiments, and our implementation, we highly recommend checking out our very detailed Notion write-up!
## Hugging Face Model Hub Repo
You can also find our project and model checkpoints in our Hugging Face Model Hub repository. The model checkpoints are located in
Our JAX/Flax implementation currently supports:
| Platform | Single-Host GPU | Multi-Device TPU |
| --- | --- | --- |
```shell
# Clone the repo
git clone https://github.com/codestella/putting-nerf-on-a-diet

# Create a conda environment; note you can use Python 3.6-3.8, as
# one of the dependencies (TensorFlow) doesn't support Python 3.9 yet.
conda create --name jaxnerf python=3.6.12
conda activate jaxnerf

# Prepare pip
conda install pip
pip install --upgrade pip

# Install requirements
pip install -r requirements.txt

# [Optional] Install GPU and TPU support for JAX.
# Change cuda110 to match your CUDA version, e.g. cuda101 for CUDA 10.1.
pip install --upgrade "jax[cuda110]" -f https://storage.googleapis.com/jax-releases/jax_releases.html

# Install Flax and the Flax version of transformers
pip install flax "transformers[flax]"
```
Download the datasets from the NeRF official Google Drive. Please download nerf_synthetic.zip and unzip it wherever you like. Let's assume it is placed under
## How to Train
- Train in our prepared Colab notebook: Colab Pro is recommended; otherwise you may encounter an out-of-memory error
- Train locally: enable DietNeRF in your yaml configuration file, then run:
```shell
# data_dir example: nerf_synthetic/lego
python -m train \
  --data_dir=/PATH/TO/YOUR/SCENE/DATA \
  --train_dir=/PATH/TO/THE/PLACE/YOU/WANT/TO/SAVE/CHECKPOINTS \
  --config=configs/CONFIG_YOU_LIKE
```
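The exact option names depend on the config files shipped in the repo's configs/ directory; as a purely hypothetical illustration (the key names below are assumptions, not the repo's actual schema), a DietNeRF config might toggle the semantic loss like this:

```yaml
# Hypothetical DietNeRF configuration snippet; check the repo's
# configs/ directory for the real key names.
dataset: blender
batch_size: 1024
semantic_loss: true        # turn on CLIP semantic consistency supervision
semantic_loss_weight: 0.1  # assumed loss-weight hyperparameter
```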
## Experimental Results
❗ Images rendered by 8-shot-trained DietNeRF
DietNeRF has a strong capacity to generalize to novel and challenging views with EXTREMELY FEW TRAINING SAMPLES!
HOTDOG / DRUM / SHIP / CHAIR / LEGO / MIC
❗ GIFs rendered by NeRF and DietNeRF trained on 14 occluded views
We added artificial occlusion to the right side of each image (i.e., we picked only left-side training poses). This experiment compares reconstruction quality under occlusion: DietNeRF shows better quality than the original NeRF when the views are occluded.
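The artificial occlusion described above amounts to masking one half of each training image. This minimal NumPy sketch (a stand-in for the actual preprocessing; the white fill value is an assumption) blanks the right half of an image:

```python
import numpy as np

def occlude_right_half(image, fill=1.0):
    """Return a copy of an HxWxC image with its right half replaced
    by a constant fill value (white by default), mimicking the
    artificial occlusion used in the 14-shot experiment."""
    occluded = image.copy()
    w = image.shape[1]
    occluded[:, w // 2:, :] = fill
    return occluded

img = np.zeros((4, 4, 3))          # dummy all-black image
out = occlude_right_half(img)
print(out[:, :2].sum(), out[:, 2:].mean())  # 0.0 1.0
```

Training on such half-occluded views tests whether the model's prior knowledge can fill in regions that were never observed, which is where the CLIP supervision helps DietNeRF.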