Context Encoders: Feature Learning by Inpainting

This is a PyTorch implementation of the CVPR 2016 paper *Context Encoders: Feature Learning by Inpainting*.



1) Semantic Inpainting Demo

  1. Install PyTorch

  2. Clone the repository

git clone
  3. Demo

    Download the pre-trained model for Paris StreetView from
    Google Drive or BaiduNetdisk

    cp netG_streetview.pth context_encoder_pytorch/model/
    cd context_encoder_pytorch/
    # Inpaint a batch of images
    python --netG model/netG_streetview.pth --dataroot dataset/val --batchSize 100
    # Inpaint a single image
    python --netG model/netG_streetview.pth --test_image result/test/cropped/065_im.png
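At test time, the context encoder hides the central square region of each input and predicts its contents. As a minimal sketch of that setup (assuming the common defaults from the paper, a 128x128 input with a 64x64 central hole; `center_mask_box` is a hypothetical helper, not part of this repo):

```python
def center_mask_box(image_size=128, mask_size=64):
    """Return (start, stop) pixel coordinates of the square region
    the context encoder hides in the center of the image.

    Sizes are illustrative defaults, not necessarily this repo's flags.
    """
    start = (image_size - mask_size) // 2
    return start, start + mask_size

# For a 128x128 image, rows/cols 32..96 are masked out
# and the network is trained to reconstruct them.
```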

2) Train on your own dataset

  1. Build dataset

    Put your images under dataset/train; all images should be inside a subdirectory (for example, dataset/train/<subdirectory>/<image files>).

    Note: Due to Google's policy, the Paris StreetView dataset is not public data; for research use, please contact pathak22.
    You can also use The Paris Dataset to train your model.
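A quick way to sanity-check the layout before training: images placed directly under dataset/train (rather than inside a subdirectory) are ignored by torchvision-style class-folder loaders. This is a hypothetical checker using only the standard library, not part of the repo:

```python
import os

def check_imagefolder_layout(root):
    """Summarize an ImageFolder-style layout: root/<subdir>/<images>.

    Returns a list of (subdirectory, image_count) pairs. Files sitting
    directly under `root` would not be picked up during training.
    """
    exts = {".png", ".jpg", ".jpeg"}
    summary = []
    for sub in sorted(os.listdir(root)):
        subdir = os.path.join(root, sub)
        if os.path.isdir(subdir):
            n = sum(1 for f in os.listdir(subdir)
                    if os.path.splitext(f)[1].lower() in exts)
            summary.append((sub, n))
    return summary
```

For example, `check_imagefolder_layout("dataset/train")` should list each subdirectory with a nonzero image count.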

  2. Train

python --cuda --wtl2 0.999 --niter 200
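The `--wtl2 0.999` flag weights the reconstruction (L2) term against the adversarial term, following the joint loss of the Context Encoders paper: L = wtl2 * L_rec + (1 - wtl2) * L_adv. A minimal sketch of that combination (`joint_loss` is an illustrative helper, not a function in this repo):

```python
def joint_loss(l2_loss, adv_loss, wtl2=0.999):
    """Combine reconstruction and adversarial losses, with wtl2
    controlling how heavily the L2 reconstruction term dominates."""
    return wtl2 * l2_loss + (1.0 - wtl2) * adv_loss
```

With wtl2 = 0.999, the adversarial term contributes only 0.1% of the total, which keeps training stable while still sharpening the inpainted region.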
  3. Test

    This step is the same as in the Semantic Inpainting Demo above.