nerf-meta is a PyTorch re-implementation of the NeRF experiments from the paper "Learned Initializations for Optimizing Coordinate-Based Neural Representations". Simply by initializing NeRF with meta-learned weights, we can achieve much faster convergence when optimizing for a new scene.
Environment
- Python 3.8
- PyTorch 1.8
- NumPy, imageio, imageio-ffmpeg
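A minimal sketch of pinning the environment above in a requirements file, assuming installation via pip (exact version pins beyond PyTorch 1.8 are assumptions):

```
torch>=1.8
numpy
imageio
imageio-ffmpeg
```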
Starting from a meta-initialized NeRF, we can interpolate between camera pose, focal length, aspect ratio and scene appearance. The videos below are generated with a 5 layer only NeRF, trained for ~100k iterations.
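The interpolation above can be sketched as linearly blending two keyframe cameras. This is a hypothetical helper, not the repository's own code: a real implementation would slerp the rotation part of the pose rather than lerping the full matrix.

```python
import numpy as np

def interpolate_cameras(pose_a, pose_b, focal_a, focal_b, n_frames=60):
    """Blend focal length and camera-to-world pose between two keyframes.

    Hypothetical sketch: naive lerp of the 4x4 c2w matrix; proper pose
    interpolation would slerp the rotations.
    """
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        pose = (1.0 - t) * pose_a + t * pose_b
        focal = (1.0 - t) * focal_a + t * focal_b
        frames.append((pose, focal))
    return frames
```

Each resulting (pose, focal) pair would then be rendered by the NeRF to produce one video frame.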
- Download the image collections of different landmarks from the image-matching-challenge
- Download the corresponding poses and bounds from the learnit google drive
Train and Evaluate
Train NeRF on a single landmark scene using Reptile meta-learning:
python tourism_train.py --config ./configs/tourism/$landmark.json
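The Reptile outer loop used during meta-training can be sketched as follows. This is a simplified illustration, not the repository's training code: `task_batch` stands in for one scene's sampled rays and target colors, and the learning rates are placeholder values.

```python
import copy
import torch

def reptile_step(model, task_batch, inner_steps=32, inner_lr=5e-4, meta_lr=1e-3):
    """One Reptile outer update (sketch): adapt a copy of the model on one
    task with SGD, then move the meta-weights toward the adapted weights."""
    meta_weights = copy.deepcopy(model.state_dict())
    inner = copy.deepcopy(model)
    opt = torch.optim.SGD(inner.parameters(), lr=inner_lr)
    x, y = task_batch  # hypothetical: rays and target colors for one scene
    for _ in range(inner_steps):
        loss = torch.nn.functional.mse_loss(inner(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    adapted = inner.state_dict()
    new_state = {k: meta_weights[k] + meta_lr * (adapted[k] - meta_weights[k])
                 for k in meta_weights}
    model.load_state_dict(new_state)
```

Repeating this step over many sampled scenes yields an initialization that adapts quickly to a new scene.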
Test Photo Tourism performance and generate an interpolation video of the landmark:
python tourism_test.py --config ./configs/tourism/$landmark.json --weight-path $meta_weight.pth
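Writing the rendered frames out as a video can be sketched with imageio (listed in the dependencies; the filename below is a placeholder, and `to_uint8` is a hypothetical helper for converting rendered float images):

```python
import numpy as np

def to_uint8(img):
    """Convert a float image in [0, 1] (e.g. a rendered rgb map) to uint8."""
    return (np.clip(img, 0.0, 1.0) * 255).astype(np.uint8)

def frames_to_video(frames, path="landmark_interp.mp4", fps=30):
    """Write a list of HxWx3 uint8 frames to a video file using the
    imageio-ffmpeg backend."""
    import imageio  # imported here so the helper stays optional
    imageio.mimwrite(path, frames, fps=fps)
```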
View Synthesis from Single Image
Given a single input view, a meta-initialized NeRF can generate a 360-degree video. The following ShapeNet video is generated with a class-specific NeRF (5 layers deep), trained for ~100k iterations.
- Download the ShapeNet data and split files from the learnit google drive
Train and Evaluate
Train NeRF on a particular ShapeNet class using Reptile meta-learning:
python shapenet_train.py --config ./configs/shapenet/$shape.json
Optimize the meta-trained model on a single view and evaluate on the other held-out views. This also generates a 360-degree video for each test object:
python shapenet_test.py --config ./configs/shapenet/$shape.json --weight-path $meta_weight.pth
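The test-time optimization step can be sketched as a short fine-tune from the meta-initialization on the single observed view. This is an illustrative helper, not the repository's API: `rays` and `target_rgb` stand in for the sampled ray batch and its ground-truth colors, and the step count and learning rate are assumptions.

```python
import torch

def adapt_to_view(model, rays, target_rgb, steps=64, lr=5e-4):
    """Fine-tune a meta-initialized NeRF on one observed view (sketch)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        pred = model(rays)
        loss = torch.nn.functional.mse_loss(pred, target_rgb)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```

After adaptation, the model is rendered from held-out poses to evaluate view synthesis quality and to produce the 360-degree video.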
I referenced several open-source NeRF and meta-learning codebases for this implementation. Specifically, I borrowed/modified code from the following repositories:
Thanks to the authors for releasing their code.