Image Super-resolution Networks based on Structure Reparameterized Convolution
DOI: https://doi.org/10.54691/e4mrwk86
Keywords: Super-resolution; Structure Reparameterization; Residual Network; Image Processing.
Abstract
Traditional convolutional-neural-network methods for image super-resolution generally suffer from two problems. First, the model has a receptive field of only a single scale, so it cannot exploit image information at smaller ranges. Second, feature maps shrink as convolutions are stacked, and repeated zero-padding of the edges is needed to maintain the original size, which loses part of the edge information. To address these two problems, we propose a network structure based on structurally re-parameterized convolution: within the same layer, a small convolution kernel is placed in parallel with a large one, the two kernels are trained jointly, and they are finally merged into a single kernel. Experimental results show that in this way the large convolution kernel can also capture finer-scale information, which improves the high-frequency detail of the reconstructed image, avoids the frequent edge padding that reduces information density, and effectively speeds up inference after the model is re-parameterized. Compared with state-of-the-art methods, our method achieves good results.
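Because convolution is linear in its kernel, the parallel branches described in the abstract can be fused after training: the small kernel is zero-padded to the size of the large one and added to it, and the biases sum. The following is a minimal single-channel numerical sketch of that merge step (our own illustration, not the paper's code; `conv2d` and `merge_kernels` are hypothetical helper names), assuming a 3x3 large kernel and a 1x1 small kernel:

```python
import numpy as np

def conv2d(x, k, b=0.0):
    # Naive single-channel "same" cross-correlation with zero padding.
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k) + b
    return out

def merge_kernels(k_large, b_large, k_small, b_small):
    # Embed the small kernel at the centre of the large one and add;
    # the biases simply sum. This is the standard re-parameterization merge.
    merged = k_large.copy()
    oh = (k_large.shape[0] - k_small.shape[0]) // 2
    ow = (k_large.shape[1] - k_small.shape[1]) // 2
    merged[oh:oh + k_small.shape[0], ow:ow + k_small.shape[1]] += k_small
    return merged, b_large + b_small

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
k3, b3 = rng.standard_normal((3, 3)), 0.1
k1, b1 = rng.standard_normal((1, 1)), -0.2

two_branch = conv2d(x, k3, b3) + conv2d(x, k1, b1)   # training-time form
km, bm = merge_kernels(k3, b3, k1, b1)
one_branch = conv2d(x, km, bm)                        # inference-time form
print(np.allclose(two_branch, one_branch))            # True
```

After the merge, inference runs a single convolution per layer instead of two, which is the source of the speed-up the abstract reports; the multi-channel case fuses the weight tensors in the same way, channel by channel.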
References
Dong C, Loy C C, He K, et al. Learning a deep convolutional network for image super-resolution[C]. Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part IV 13. Springer International Publishing, 2014: 184-199.
Kim J, Lee J K, Lee K M. Accurate image super-resolution using very deep convolutional networks[C]. Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 1646-1654.
Wang X, Yu K, Wu S, et al. ESRGAN: Enhanced super-resolution generative adversarial networks[C]. Proceedings of the European conference on computer vision (ECCV) workshops. 2018.
Bell-Kligler S, Shocher A, Irani M. Blind super-resolution kernel estimation using an internal-GAN[J]. Advances in Neural Information Processing Systems, 2019, 32.
Wang X, Xie L, Dong C, et al. Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data[C]. Proceedings of the IEEE/CVF international conference on computer vision. 2021: 1905-1914.
Zhang Y, Li K, Li K, et al. Image super-resolution using very deep residual channel attention networks[C]. Proceedings of the European conference on computer vision (ECCV). 2018: 286-301.
Chen H, Gu J, Zhang Z. Attention in attention network for image super-resolution[J]. arXiv preprint arXiv:2104.09497, 2021.
Liang J, Cao J, Sun G, et al. SwinIR: Image restoration using Swin Transformer[C]. Proceedings of the IEEE/CVF international conference on computer vision. 2021: 1833-1844.
Li H, Yang Y, Chang M, et al. SRDiff: Single image super-resolution with diffusion probabilistic models[J]. Neurocomputing, 2022, 479: 47-59.
Ledig C, Theis L, Huszár F, et al. Photo-realistic single image super-resolution using a generative adversarial network[C]. Proceedings of the IEEE conference on computer vision and pattern recognition. 2017: 4681-4690.
Lim B, Son S, Kim H, et al. Enhanced deep residual networks for single image super-resolution[C]. Proceedings of the IEEE conference on computer vision and pattern recognition workshops. 2017: 136-144.
Kim J, Lee J K, Lee K M. Deeply-recursive convolutional network for image super-resolution[C]. Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 1637-1645.
Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[J]. Advances in neural information processing systems, 2012, 25.
He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]. Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778.
Luo W, Li Y, Urtasun R, et al. Understanding the effective receptive field in deep convolutional neural networks[J]. Advances in neural information processing systems, 2016, 29.
Ioffe S, Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift[C]. International conference on machine learning. PMLR, 2015: 448-456.
Ding X, Zhang X, Ma N, et al. RepVGG: Making VGG-style ConvNets great again[C]. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021: 13733-13742.
Ding X, Guo Y, Ding G, et al. ACNet: Strengthening the kernel skeletons for powerful CNN via asymmetric convolution blocks[C]. Proceedings of the IEEE/CVF international conference on computer vision. 2019: 1911-1920.
Ding X, Hao T, Tan J, et al. ResRep: Lossless CNN pruning via decoupling remembering and forgetting[C]. Proceedings of the IEEE/CVF international conference on computer vision. 2021: 4510-4520.
License
Copyright (c) 2024 Frontiers in Science and Engineering

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.