Wide Activation for Efficient and Accurate Image Super-Resolution
- Training and Validation:
| Network | Parameters | DIV2K (val) PSNR |
| --- | --- | --- |
We measured PSNR on DIV2K validation images 0801 ~ 0900 (trained on 0001 ~ 0800), on RGB channels and without self-ensemble, which is identical to the EDSR baseline model settings (a sketch of this measurement follows the tables below). Both baseline models have 16 residual blocks.
| Number of Residual Blocks | 1 | 3 | 5 | 8 |
| --- | --- | --- | --- | --- |
| EDSR (Baseline) | 33.210 | 34.043 | 34.284 | 34.457 |
| WDSR-A (Baseline) | 33.323 | 34.163 | 34.388 | 34.541 |
| WDSR-B (Baseline) | 33.434 | 34.205 | 34.409 | 34.536 |
Comparison of EDSR and our proposed WDSR-A and WDSR-B for bicubic x2 image super-resolution on the DIV2K dataset.
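For reference, here is a minimal sketch of how the PSNR above could be measured on RGB channels for one super-resolved/ground-truth pair. The function name `rgb_psnr` and the border crop equal to the upscaling factor are assumptions mirroring common SR evaluation practice, not the exact evaluation script of this repository.

```python
import torch

def rgb_psnr(sr: torch.Tensor, hr: torch.Tensor, scale: int = 2) -> float:
    """PSNR on RGB channels between super-resolved and ground-truth images.

    sr, hr: float tensors with values in [0, 255] and shape (3, H, W).
    A border of `scale` pixels is cropped before measuring (an assumption).
    """
    sr = sr[:, scale:-scale, scale:-scale].double()
    hr = hr[:, scale:-scale, scale:-scale].double()
    mse = torch.mean((sr - hr) ** 2)
    if mse == 0:
        return float("inf")
    return (10.0 * torch.log10(255.0 ** 2 / mse)).item()

# Example on random data standing in for one DIV2K validation pair (x2).
hr = torch.randint(0, 256, (3, 512, 512)).float()
sr = (hr + torch.randn_like(hr)).clamp(0, 255)
print(rgb_psnr(sr, hr, scale=2))
```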
WDSR Network Architecture
Left: vanilla residual block in EDSR. Middle: wide activation (WDSR-A). Right: wider activation with linear low-rank convolution (WDSR-B). The proposed wide-activation WDSR-A and WDSR-B share similar merits with MobileNetV2 but use different architectures and achieve much better PSNR.
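As a concrete illustration, a minimal PyTorch sketch of the two residual-block variants described above: WDSR-A expands the channels before the ReLU and shrinks them back afterwards, while WDSR-B expands even further with 1x1 convolutions and uses a linear low-rank pair (1x1 then 3x3, with no non-linearity in between). The class names, expansion factors, and low-rank ratio below are illustrative assumptions, not the exact configuration of the released code.

```python
import torch.nn as nn

class WDSRABlock(nn.Module):
    """Residual block with wide activation (WDSR-A style)."""
    def __init__(self, n_feats: int = 32, expansion: int = 4):
        super().__init__()
        wide = n_feats * expansion
        self.body = nn.Sequential(
            nn.Conv2d(n_feats, wide, 3, padding=1),   # expand channels
            nn.ReLU(inplace=True),                    # activation on wide features
            nn.Conv2d(wide, n_feats, 3, padding=1),   # shrink back
        )

    def forward(self, x):
        return x + self.body(x)

class WDSRBBlock(nn.Module):
    """Residual block with wider activation and a linear low-rank
    convolution (WDSR-B style)."""
    def __init__(self, n_feats: int = 32, expansion: int = 6,
                 low_rank_ratio: float = 0.8):
        super().__init__()
        wide = n_feats * expansion
        slim = int(n_feats * low_rank_ratio)
        self.body = nn.Sequential(
            nn.Conv2d(n_feats, wide, 1),              # 1x1 expand
            nn.ReLU(inplace=True),                    # activation on wide features
            nn.Conv2d(wide, slim, 1),                 # linear low-rank: 1x1 shrink...
            nn.Conv2d(slim, n_feats, 3, padding=1),   # ...then 3x3, no ReLU between
        )

    def forward(self, x):
        return x + self.body(x)
```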
Weight Normalization vs. Batch Normalization and No Normalization
Training loss and validation PSNR with weight normalization, batch normalization, or no normalization. Training with weight normalization converges faster and reaches higher accuracy.
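A minimal sketch of how weight normalization could be attached to the convolutions of such a block with `torch.nn.utils.weight_norm`, rather than inserting BatchNorm layers; the helper name `conv_wn` and applying it to every convolution are assumptions about the training setup, not the repository's exact code.

```python
import torch.nn as nn
from torch.nn.utils import weight_norm

def conv_wn(in_ch: int, out_ch: int, kernel_size: int) -> nn.Module:
    """Convolution wrapped with weight normalization instead of BatchNorm."""
    return weight_norm(
        nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
    )

# Example: a wide-activation residual body built from weight-normalized convs.
n_feats, expansion = 32, 4
body = nn.Sequential(
    conv_wn(n_feats, n_feats * expansion, 3),
    nn.ReLU(inplace=True),
    conv_wn(n_feats * expansion, n_feats, 3),
)
```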