# nsff_pl

Neural Scene Flow Fields using pytorch-lightning. This repo reimplements the NSFF idea, but modifies several operations based on observations of NSFF results and discussions with the authors. For discussion details, please see the issues of the original repo. The code is based on my previous implementation.

The main modifications are the following:

  1. Remove the blending weight in the static NeRF; instead, adopt the addition strategy from NeRF-W.
  2. Compose the static and dynamic components in image warping as well.

Implementation details are in models/rendering.py.
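The additive composition (modification 1) can be sketched as follows. This is a minimal illustration of NeRF-W-style compositing of a static and a dynamic field, not the exact code in models/rendering.py; the function name and tensor shapes are assumptions for this example. Each branch gets its own per-sample alpha, while the transmittance is computed from the sum of the two densities, so no learned blending weight is needed:

```python
import torch

def composite_static_dynamic(sigma_s, rgb_s, sigma_d, rgb_d, deltas):
    """Hypothetical sketch of NeRF-W-style additive composition.

    Assumed shapes:
        sigma_s, sigma_d, deltas: (N_rays, N_samples)
        rgb_s, rgb_d:             (N_rays, N_samples, 3)
    """
    # per-branch opacities and the combined opacity from the summed density
    alpha_s = 1 - torch.exp(-deltas * sigma_s)
    alpha_d = 1 - torch.exp(-deltas * sigma_d)
    alphas = 1 - torch.exp(-deltas * (sigma_s + sigma_d))

    # transmittance: probability the ray reaches each sample unoccluded,
    # accumulated over BOTH fields (shifted cumulative product)
    T = torch.cumprod(
        torch.cat([torch.ones_like(alphas[:, :1]), 1 - alphas + 1e-10], dim=1),
        dim=1)[:, :-1]

    # each branch contributes color in proportion to its own opacity
    rgb_map = (T.unsqueeze(-1) *
               (alpha_s.unsqueeze(-1) * rgb_s +
                alpha_d.unsqueeze(-1) * rgb_d)).sum(dim=1)  # (N_rays, 3)
    weights = T * alphas
    return rgb_map, weights
```

Setting the dynamic density to zero recovers standard static-only volume rendering, which is also how the background reconstructions below are obtained.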

These modifications empirically produce better results on the kid-running scene, as shown below:

IMPORTANT: The code for the kid-running scene has been moved to the nsff_orig branch (the images are still shown here for showcase)! The master branch will be updated for custom data usage.

## Full reconstruction

Left: GT. Center: this repo (PSNR=35.02). Right: pretrained model of the original repo (PSNR=30.45).

## Background reconstruction

Left: this repo. Right: pretrained model of the original repo (obtained by setting raw_blend_w to 0).

## Fix-view-change-time (view 8, times from 0 to 16)

Left: this repo. Right: pretrained model of the original repo.

## Fix-time-change-view (time 8, views from 0 to 16)

Left: this repo. Right: pretrained model of the original repo.

## Novel view synthesis (spiral)

The colors produced by our method are more vivid and closer to the GT images, both qualitatively and quantitatively (not an artifact of gif compression). The background is also more stable and cleaner.

## Bonus - Depth

Our method also produces smoother depth maps, although this might not have a direct impact on image quality.



Top left: static depth from this repo. Top right: full depth from this repo.
Bottom left: static depth from the original repo. Bottom right: full depth from the original repo.

## GitHub

https://github.com/kwea123/nsff_pl