Real-ESRGAN

Real-ESRGAN aims at developing Practical Algorithms for General Image Restoration.

We extend the powerful ESRGAN to a practical restoration application (namely, Real-ESRGAN), which is trained with pure synthetic data.

We have provided a pretrained model (RealESRGAN_x4plus.pth) for ×4 upsampling.

Note that Real-ESRGAN may still fail in some cases, as real-world degradations can be very complex.

Moreover, it may not perform well on human faces, text, etc.; these cases will be optimized later.

Real-ESRGAN will be a long-term supported project (in my current plan :smiley:). It will be continuously updated
in my spare time.

Here is a TODO list for the near future:

  • [ ] optimize for human faces
  • [ ] optimize for texts
  • [ ] optimize for animation images
  • [ ] support more scales
  • [ ] support controllable restoration strength

If you have any good ideas or requests, please open an issue/discussion to let me know.

If you have images that Real-ESRGAN cannot restore well, please also open an issue/discussion. I will record them (but I cannot guarantee to resolve them :stuck_out_tongue:). If necessary, I will open a page to collect these real-world cases that still need to be solved but are difficult for current technology to handle well.


Portable executable files

You can download Windows executable files from https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRGAN-ncnn-vulkan-20210725-windows.zip

This executable file is portable and includes all the binaries and models required. No CUDA or PyTorch environment is needed.

You can simply run the following command:

./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png

We have provided three models:

  1. realesrgan-x4plus (default)
  2. realesrnet-x4plus
  3. esrgan-x4

You can use the -n argument to select other models, for example:

./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n realesrnet-x4plus

Note that it may introduce block inconsistency (and also generate slightly different results from the PyTorch implementation), because this executable first crops the input image into several tiles, processes them separately, and finally stitches them back together.
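
For intuition, here is a minimal, hypothetical sketch of such tile-based processing (the function, tile size, and lack of overlap handling are illustrative only; they are not how the executable is actually implemented):

    import numpy as np

    def upscale_by_tiles(img, upscale_fn, tile=256, scale=4):
        # Illustrative only: split the image into tiles, upscale each one
        # independently, and stitch the results back together. Because the
        # tiles are processed without shared context (no overlap here),
        # small inconsistencies can appear along tile borders.
        h, w, c = img.shape
        out = np.zeros((h * scale, w * scale, c), dtype=img.dtype)
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                patch = img[y:y + tile, x:x + tile]
                out[y * scale:(y + patch.shape[0]) * scale,
                    x * scale:(x + patch.shape[1]) * scale] = upscale_fn(patch)
        return out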

This executable file is based on the wonderful Tencent/ncnn and realsr-ncnn-vulkan by nihui.


:wrench: Dependencies and Installation

Installation

  1. Clone repo

    git clone https://github.com/xinntao/Real-ESRGAN.git
    cd Real-ESRGAN
    
  2. Install dependent packages

    # Install basicsr - https://github.com/xinntao/BasicSR
    # We use BasicSR for both training and inference
    pip install basicsr
    pip install -r requirements.txt
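
After installation, an optional quick sanity check (my own suggestion, not part of the official steps) is to confirm that basicsr and PyTorch import correctly and whether CUDA is visible:

    import basicsr
    import torch

    print('basicsr:', basicsr.__version__)
    print('torch:', torch.__version__)
    print('CUDA available:', torch.cuda.is_available())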
    

:zap: Quick Inference

Download the pre-trained model (RealESRGAN_x4plus.pth):

wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models

Inference!

python inference_realesrgan.py --model_path experiments/pretrained_models/RealESRGAN_x4plus.pth --input inputs

Results are saved in the results folder.
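
If you prefer to call the model directly from Python instead of using inference_realesrgan.py, below is a rough sketch. It assumes the x4plus model uses the RRDBNet architecture from BasicSR (23 RRDB blocks, 64 features) and that the checkpoint stores its weights under the 'params_ema' key; please check inference_realesrgan.py for the authoritative settings. The input and output file names are placeholders.

    import cv2
    import numpy as np
    import torch
    from basicsr.archs.rrdbnet_arch import RRDBNet

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    # Architecture settings assumed to match RealESRGAN_x4plus
    # (see inference_realesrgan.py for the exact configuration).
    model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23,
                    num_grow_ch=32, scale=4)
    state = torch.load('experiments/pretrained_models/RealESRGAN_x4plus.pth',
                       map_location='cpu')
    model.load_state_dict(state['params_ema'])  # key assumed; try 'params' if missing
    model.eval().to(device)

    # Read a BGR uint8 image and convert it to a float RGB tensor in [0, 1].
    img = cv2.imread('inputs/your_image.png', cv2.IMREAD_COLOR)
    img = torch.from_numpy(img[:, :, ::-1].copy()).permute(2, 0, 1).float() / 255.0
    img = img.unsqueeze(0).to(device)

    with torch.no_grad():
        output = model(img)

    # Convert back to BGR uint8 and save (the results folder must exist).
    output = output.squeeze(0).clamp(0, 1).permute(1, 2, 0).cpu().numpy()[:, :, ::-1]
    cv2.imwrite('results/your_image_out.png', (output * 255.0).round().astype(np.uint8))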

:european_castle: Model Zoo

:computer: Training

A detailed guide can be found in Training.md.

BibTeX

@Article{wang2021realesrgan,
    title={Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data},
    author={Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan},
    journal={arXiv:2107.10833},
    year={2021}
}

:e-mail: Contact

If you have any questions, please email [email protected] or [email protected].

GitHub

https://github.com/xinntao/Real-ESRGAN