Neural Hash Encoding

This is a work-in-progress reimplementation of Instant Neural Graphics Primitives.
Currently it can train an implicit representation of a gigapixel image using a multiresolution hash encoding.
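The core idea of the multiresolution hash encoding can be sketched for a single level. This is a minimal NumPy illustration, not the repo's implementation; the table size, feature width, and 2D setting are assumptions, while the spatial hash (coordinate-wise multiplication by large primes, XOR-combined, modulo the table size) follows the paper:

```python
import numpy as np

# Primes used by the paper's spatial hash for the first two dimensions.
PRIMES = np.array([1, 2654435761], dtype=np.uint64)

def hash_coords(coords, table_size):
    """Hash integer 2D grid coordinates into [0, table_size)."""
    coords = coords.astype(np.uint64)
    h = coords[..., 0] * PRIMES[0]
    h ^= coords[..., 1] * PRIMES[1]
    return (h % np.uint64(table_size)).astype(np.int64)

def encode(xy, table, resolution):
    """Bilinearly interpolate hashed features at continuous coords in [0, 1)^2."""
    pos = xy * resolution
    base = np.floor(pos).astype(np.int64)
    frac = pos - base
    feat = 0.0
    # Blend the feature vectors of the four surrounding grid corners.
    for dx in (0, 1):
        for dy in (0, 1):
            corner = base + np.array([dx, dy])
            idx = hash_coords(corner, table.shape[0])
            wx = frac[..., 0] if dx else 1 - frac[..., 0]
            wy = frac[..., 1] if dy else 1 - frac[..., 1]
            feat = feat + (wx * wy)[..., None] * table[idx]
    return feat
```

The full encoding runs this lookup at several resolutions and concatenates the per-level features before the MLP.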

FYI: this is brand new; most parameters in the training script are hard-coded right now.

Check out the results in `viz`.


Download the Tokyo image

```
wget -O tokyo.jpg
```

Convert to numpy binary format for faster reading (~1 s with .npy vs ~14 s with .jpg):

```python
import numpy as np
from PIL import Image

Image.MAX_IMAGE_PIXELS = 10**10  # allow decoding the gigapixel image

img = np.asarray(Image.open("tokyo.jpg"))  # about 3.5 GB in memory
np.save("tokyo.npy", img)
```
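Reading the array back is then cheap; `np.load` with `mmap_mode` avoids pulling the whole file into memory at once. A small round-trip demo with a dummy array standing in for `tokyo.npy`:

```python
import numpy as np

# Dummy stand-in for the converted image array.
arr = np.arange(12, dtype=np.uint8).reshape(3, 2, 2)
np.save("demo.npy", arr)

# Memory-mapped load: the file is paged in lazily as slices are accessed.
loaded = np.load("demo.npy", mmap_mode="r")
```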


```
python src/
```

Implementation Notes (From the Paper)


> In all tasks, except for NeRF which we will describe later, we use an MLP with two hidden layers that have a width of 64 neurons and rectified linear unit (ReLU) activation functions.
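That architecture is small enough to sketch in a few lines of NumPy. This is an illustration under assumptions (input dimension, RGB output, He-style weight init), not the repo's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(in_dim, hidden=64, out_dim=3):
    """Two hidden layers of width 64, per the paper's default MLP."""
    dims = [in_dim, hidden, hidden, out_dim]
    return [(rng.normal(0.0, np.sqrt(2.0 / d_in), size=(d_in, d_out)),
             np.zeros(d_out))
            for d_in, d_out in zip(dims[:-1], dims[1:])]

def mlp_forward(params, x):
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        if i < len(params) - 1:  # ReLU on hidden layers only
            x = np.maximum(x, 0.0)
    return x
```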

4. Initialization

  • Initialize hash table entries with uniform distribution [-1e-4, 1e-4]
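The initialization above is a one-liner in NumPy. The table shape here (2^14 entries, 2 features per entry) is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Uniform init in [-1e-4, 1e-4], as described above.
table = rng.uniform(-1e-4, 1e-4, size=(2**14, 2)).astype(np.float32)
```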

5. Training

  • Optimizer: Adam (beta1 = 0.9, beta2 = 0.99, eps = 1e-15)
  • Regularization:
    • L2: factor 1e-6, applied to the MLP weights, not the hash table entries
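The split regularization can be expressed by branching on the parameter group. This is a hypothetical SGD-style step just to show where the L2 term attaches (the paper uses Adam); names and learning rate are made up for illustration:

```python
import numpy as np

WEIGHT_DECAY = 1e-6  # L2 factor from the paper
LR = 1e-2            # illustrative learning rate

def sgd_step(param, grad, is_mlp_weight):
    """Apply L2 decay to MLP weights only; hash-table entries get the raw gradient."""
    if is_mlp_weight:
        grad = grad + WEIGHT_DECAY * param
    return param - LR * grad
```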

> We skip Adam steps for hash table entries whose gradient is exactly 0. This saves ~10% performance when gradients are sparse.
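This sparse-update trick can be sketched with a masked Adam step: rows of the hash table with an all-zero gradient keep their parameters and moment estimates untouched. The Adam hyperparameters follow the paper; shapes and learning rate are assumptions:

```python
import numpy as np

BETA1, BETA2, EPS, LR = 0.9, 0.99, 1e-15, 1e-2

def adam_step_sparse(param, grad, m, v, t):
    """Adam update applied only to table rows with a nonzero gradient."""
    rows = np.any(grad != 0.0, axis=-1)  # mask of rows to update
    m[rows] = BETA1 * m[rows] + (1 - BETA1) * grad[rows]
    v[rows] = BETA2 * v[rows] + (1 - BETA2) * grad[rows] ** 2
    m_hat = m[rows] / (1 - BETA1 ** t)   # bias-corrected first moment
    v_hat = v[rows] / (1 - BETA2 ** t)   # bias-corrected second moment
    param[rows] -= LR * m_hat / (np.sqrt(v_hat) + EPS)
    return param, m, v
```

Skipping zero-gradient rows matters here because each training batch only touches the handful of hash-table entries its sample coordinates map to.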

