The code for the paper Learning Continuous Image Representation with Local Implicit Image Function, by Yinbo Chen, Sifei Liu and Xiaolong Wang.


  • The code is under testing and will be finalized soon.
  • Visualization code will be provided.

Reproducing Experiments

Environment

  • Python 3
  • PyTorch 1.6.0
  • TensorboardX
  • yaml, numpy, tqdm, imageio
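
Assuming a pip-based setup, the environment above could be captured in a requirements.txt like the following (only the PyTorch version is stated by this README; the unpinned entries are left to pip to resolve):

```
torch==1.6.0
tensorboardX
pyyaml
numpy
tqdm
imageio
```

These can then be installed in one step with pip install -r requirements.txt.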


Data

Create a load/ folder (mkdir load) to hold the dataset folders.

  • DIV2K: mkdir and cd into load/div2k. Download the HR images and the bicubic validation LR images from the DIV2K website (i.e. Train_HR, Valid_HR, Valid_LR_X2, Valid_LR_X3, Valid_LR_X4), then unzip these files to get the image folders.

  • benchmark datasets: mkdir and cd into load/benchmark. Download and tar -xf the benchmark datasets (provided by this repo) to get the image folders Set5/, Set14/, B100/, Urban100/.

  • celebAHQ: mkdir load/celebAHQ, cp the needed script from scripts/ into load/celebAHQ/, then cd load/celebAHQ/. Download the data from the Google Drive link (provided by this repo) and unzip it. Run the copied script with python to get the image folders 256/, 128/, 64/, 32/. Download the split.json.
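
The three dataset bullets above produce one common layout under load/. As a sketch (the subfolder names inside each dataset directory are taken from the bullets above and should be matched against the paths your config files expect):

```shell
# create the dataset skeleton; the image files themselves come from
# the downloads described in the three bullets above
mkdir -p load/div2k load/benchmark load/celebAHQ
# after downloading and extracting, the tree should roughly be:
#   load/div2k/      -> the unzipped HR / bicubic-LR image folders
#   load/benchmark/  -> Set5/ Set14/ B100/ Urban100/
#   load/celebAHQ/   -> 256/ 128/ 64/ 32/ split.json
ls load
```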

Running the code

0. Preliminaries

  • For the training and testing scripts, use --gpu [GPU] to specify the GPU IDs for running (e.g. --gpu 0 or --gpu 0,1).

  • By default, the saving folder is save/_[CONFIG_NAME]. Use --name to specify a different name if needed.

  • For the dataset args in the configs: cache: in_memory pre-loads the dataset into memory (may require large memory, e.g. ~40GB for DIV2K); cache: bin creates binary files (in the same folder) on the first run; cache: none loads images directly from disk. Modify this according to your hardware resources before running the training scripts.
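
As a hedged illustration of where the cache option lives in a config, the dataset section of a training config might look like this (the surrounding field names are assumptions for illustration, not verbatim from this codebase; check the actual files under configs/):

```yaml
# fragment of a training config: the dataset args carry the cache mode
train_dataset:
  dataset:
    args:
      root_path: ./load/div2k/      # path to the image folder
      cache: in_memory              # one of: in_memory / bin / none
```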

1. DIV2K experiments

Train: python --config configs/train-div2k/train_edsr-baseline-liif.yaml (with EDSR-baseline backbone, for RDN replace edsr-baseline with rdn). We use 1 GPU for training EDSR-baseline-LIIF and 4 GPUs for RDN-LIIF.

Test: bash scripts/[DIV2K_TEST_SCRIPT] [MODEL_PATH] [GPU] for the DIV2K validation set, bash scripts/[BENCHMARK_TEST_SCRIPT] [MODEL_PATH] [GPU] for the benchmark datasets. [MODEL_PATH] is the path to a .pth file; we use epoch-last.pth in the corresponding saving folder.

Pretrained models:

  • EDSR-baseline-LIIF: Download (19M)
  • RDN-LIIF: Download (256M)

2. celebAHQ experiments

Train: python [TRAIN_SCRIPT] --config configs/train-celebAHQ/[CONFIG_NAME].yaml.

Test: python [TEST_SCRIPT] --config configs/test/test-celebAHQ-32-256.yaml --model [MODEL_PATH] (or test-celebAHQ-64-128.yaml). We use epoch-best.pth in the corresponding saving folder.