nerf-meta is a PyTorch re-implementation of the NeRF experiments from the paper "Learned Initializations for Optimizing Coordinate-Based Neural Representations". Simply by initializing NeRF with meta-learned weights, we can achieve significantly faster convergence than with a standard random initialization.


Requirements

  • Python 3.8
  • PyTorch 1.8
  • NumPy, imageio, imageio-ffmpeg

Photo Tourism

Starting from a meta-initialized NeRF, we can interpolate between camera poses, focal lengths, aspect ratios, and scene appearances. The videos below are generated with a NeRF only 5 layers deep, trained for ~100k iterations.
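For scalar and embedding-valued quantities such as focal length and appearance, the interpolation itself is just a linear blend between the two endpoint images. The sketch below illustrates the idea; the tensor names, embedding size, and focal values are made up for illustration and are not the repository's actual API:

```python
import torch

torch.manual_seed(0)

def lerp(a, b, t):
    # Linear interpolation between two tensors, t in [0, 1]
    return (1.0 - t) * a + t * b

# Hypothetical per-image quantities for two endpoint photos:
focal_a, focal_b = torch.tensor(520.0), torch.tensor(610.0)
appear_a, appear_b = torch.randn(48), torch.randn(48)  # appearance embeddings

frames = []
for t in torch.linspace(0.0, 1.0, steps=30):
    focal = lerp(focal_a, focal_b, t)
    appearance = lerp(appear_a, appear_b, t)
    # Camera rotations are usually interpolated with slerp rather than lerp;
    # rendering each frame with (pose, focal, appearance) produces the video.
    frames.append((focal, appearance))
```

Rendering one frame per interpolated tuple and stitching them with imageio-ffmpeg yields the smooth fly-through videos.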


Train and Evaluate

  1. Train NeRF on a single landmark scene using Reptile meta-learning:

    python --config ./configs/tourism/$landmark.json
  2. Test Photo Tourism performance and generate an interpolation video of the landmark:

    python --config ./configs/tourism/$landmark.json --weight-path $meta_weight.pth
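The Reptile scheme behind step 1 alternates between adapting a copy of the model to one sampled scene and nudging the meta-weights toward the adapted weights. This is a simplified sketch with a toy model and random data, not the repository's actual training code; every name and hyperparameter here is an assumption:

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_model():
    # Toy stand-in for the NeRF MLP (the real model is 5 layers with positional encoding)
    return nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))

def inner_loop(model, coords, target, inner_steps=8, inner_lr=1e-2):
    # Task-specific adaptation: a few SGD steps on a single scene's rays
    opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        opt.zero_grad()
        nn.functional.mse_loss(model(coords), target).backward()
        opt.step()

def reptile_step(meta_model, task_model, meta_lr=0.5):
    # Reptile outer update: move the meta-weights toward the adapted weights
    with torch.no_grad():
        for p_meta, p_task in zip(meta_model.parameters(), task_model.parameters()):
            p_meta.add_(meta_lr * (p_task - p_meta))

meta_model = make_model()
for _ in range(4):  # each outer iteration samples one "scene" (random data here)
    coords, target = torch.rand(128, 3), torch.rand(128, 3)
    task_model = copy.deepcopy(meta_model)
    inner_loop(task_model, coords, target)
    reptile_step(meta_model, task_model)
```

Because the outer update never needs second-order gradients, Reptile is cheap to run even with many inner steps per scene.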

View Synthesis from Single Image

Given a single input view, a meta-initialized NeRF can generate a 360-degree video. The following ShapeNet video is generated with a class-specific NeRF (5 layers deep), trained for ~100k iterations.


Train and Evaluate

  1. Train NeRF on a particular ShapeNet class using Reptile meta-learning:

    python --config ./configs/shapenet/$shape.json
  2. Optimize the meta-trained model on a single view and evaluate on the other held-out views. This also generates a 360-degree video for each test object:

    python --config ./configs/shapenet/$shape.json --weight-path $meta_weight.pth
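Step 2 amounts to a short test-time optimization: starting from the meta-learned weights, the model is fit to the rays of the single input view, then rendered from an orbit of camera poses. A minimal sketch of that adaptation, using a toy model and random stand-in data rather than the repository's actual code (the weight-loading line and all names are assumptions):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for the class-specific NeRF MLP (real model: 5 layers, positional encoding)
model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))
# model.load_state_dict(torch.load(meta_weight_path))  # start from the meta-learned init

# Stand-ins for ray samples and pixel colors from the single input view
coords = torch.rand(256, 3)
pixels = torch.rand(256, 3)

opt = torch.optim.SGD(model.parameters(), lr=1e-2)
losses = []
for step in range(100):  # a meta-learned init needs only a short optimization
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(coords), pixels)
    loss.backward()
    opt.step()
    losses.append(loss.item())
# After adapting, the model is rendered from a circle of camera poses for the 360 video.
```

The point of the meta-learned initialization is that this loop converges in far fewer steps than optimizing from scratch.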


Acknowledgements

I referenced several open-source NeRF and meta-learning codebases for this implementation. Specifically, I borrowed/modified code from the following repositories:

Thanks to the authors for releasing their code.