Authors
Jiahui Yu, Yuchen Fan, Jianchao Yang, Ning Xu, Zhaowen Wang, Xinchao Wang, Thomas Huang
Publication date
2018/8/27
Journal
BMVC 2019 (challenge report of the winning solution in NTIRE Challenge on Single Image Super-Resolution, CVPR 2018)
Description
In this report we demonstrate that, with the same parameters and computational budgets, models with wider features before ReLU activation achieve significantly better performance for single image super-resolution (SISR). The resulting SR residual network has a slim identity mapping pathway with wider (2× to 4×) channels before activation in each residual block. To widen activation further (6× to 9×) without computational overhead, we introduce linear low-rank convolution into SR networks and achieve even better accuracy-efficiency tradeoffs. In addition, compared with batch normalization or no normalization, we find that training with weight normalization leads to better accuracy for deep super-resolution networks. Our proposed SR network WDSR achieves better results on the large-scale DIV2K image super-resolution benchmark in terms of PSNR with the same or lower computational complexity. Based on WDSR, our method also won 1st place in the NTIRE 2018 Challenge on Single Image Super-Resolution in all three realistic tracks. Experiments and ablation studies support the importance of wide activation for image super-resolution. Code is released at: https://github.com/JiahuiYu/wdsr_ntire2018
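The "same parameters, wider activation" tradeoff can be made concrete with simple parameter accounting. The sketch below is not the authors' code; it assumes a residual block built from two 3×3 convolutions (EDSR/WDSR-A style) and, for the low-rank variant, a hypothetical factorization (1×1 expand, ReLU, 1×1 shrink, 3×3 project) with an assumed shrink fraction. It shows how slimming the identity pathway while expanding channels before ReLU keeps the budget fixed, and how a low-rank factorization admits even larger expansion.

```python
# Hedged sketch (not the released WDSR code): parameter accounting for
# wide-activation residual blocks. Biases are ignored for simplicity.

def block_params(width, expansion, k=3):
    """WDSR-A style block: kxk conv widening width -> expansion*width,
    ReLU, then kxk conv mapping back expansion*width -> width."""
    wide = int(width * expansion)
    return k * k * width * wide + k * k * wide * width

def block_params_lowrank(width, expansion, shrink_frac=0.8, k=3):
    """WDSR-B style block (assumed factorization): 1x1 conv widens
    width -> expansion*width, ReLU, then a linear low-rank pair
    (1x1 shrink + kxk conv) maps back to width."""
    wide = int(width * expansion)
    low = int(width * shrink_frac)  # shrink_frac is an illustrative choice
    return width * wide + wide * low + k * k * low * width

# Baseline block: 64 channels, no expansion.
baseline = block_params(64, 1)   # 2 * 9 * 64 * 64 = 73728

# Wide-activation block: 4x expansion before ReLU, but a slimmer
# 32-channel identity pathway keeps the parameter count identical.
wide = block_params(32, 4)       # 9*32*128 + 9*128*32 = 73728

# Low-rank block: 6x expansion before ReLU, still far cheaper,
# leaving headroom to widen activation without extra cost.
lowrank = block_params_lowrank(32, 6)

print(baseline, wide, lowrank)
```

With the budget held constant, the wide-activation block exposes four times as many channels to the ReLU nonlinearity, which is the effect the report attributes the PSNR gains to.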
Scholar articles
J Yu, Y Fan, J Yang, N Xu, Z Wang, X Wang, T Huang - arXiv preprint arXiv:1808.08718, 2018
J Yu, Y Fan, T Huang - 30th British Machine Vision Conference, BMVC 2019, 2020