Abstract
In this paper we present a perceptual and error-based comparison of four deep-learning super-resolution architectures, ESPCN, SRResNet, ProGanSR and LapSRN, applied to photo-realistic images at a 4x upscaling factor, adapting several current state-of-the-art architectures based on Convolutional Neural Networks (CNNs). The resulting application and the implemented CNNs are evaluated with objective metrics (Peak Signal-to-Noise Ratio, PSNR, and the Structural Similarity Index, SSIM) and a perceptual metric (Mean Opinion Score testing) to judge their relative quality and their implementation within the program. The results demonstrate the effectiveness of super-resolution: most network implementations give an average gain of +1 to +2 dB in PSNR and +0.05 to +0.1 in SSIM over traditional bicubic scaling. The perceptual test further shows that participants almost always prefer the images upscaled by each CNN model to those produced by traditional bicubic scaling. These findings also point to diverging paths in super-resolution research, where the focus is shifting from purely error-reducing, objective-based models to perceptually focused models that satisfy human perception of a high-resolution image.
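For readers who want to reproduce the objective side of the comparison, the sketch below computes PSNR and SSIM for a bicubic 4x upscale against a ground-truth image. It is a minimal illustration, not the authors' evaluation code: the file name `ground_truth.png` is a placeholder, and it assumes Pillow and scikit-image >= 0.19 (for the `channel_axis` argument).

```python
# Minimal sketch of the objective evaluation: PSNR and SSIM between a
# ground-truth image and a bicubic 4x upscale of its downscaled version.
# "ground_truth.png" is a placeholder path, not a file from the paper.
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hr = Image.open("ground_truth.png").convert("RGB")   # high-resolution reference
w, h = hr.size
lr = hr.resize((w // 4, h // 4), Image.BICUBIC)      # simulate the 4x downscale
sr = lr.resize((w, h), Image.BICUBIC)                # bicubic baseline; swap in a CNN output here

hr_np = np.asarray(hr)
sr_np = np.asarray(sr)

psnr = peak_signal_noise_ratio(hr_np, sr_np, data_range=255)
ssim = structural_similarity(hr_np, sr_np, channel_axis=-1, data_range=255)
print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")
```

In the study, each CNN's 4x output would take the place of the bicubic upscale here, yielding the +1 to +2 dB PSNR and +0.05 to +0.1 SSIM gains quoted in the abstract.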
More Information
| Identification Number: | https://doi.org/10.1007/978-3-030-36808-1_37 |
| --- | --- |
| Status: | Published |
| Refereed: | Yes |
| Publisher: | Springer Verlag (Germany) |
| Additional Information: | This is a post-peer-review, pre-copyedit version of an article published in Communications in Computer and Information Science. The final authenticated version is available online at: http://dx.doi.org/10.1007/978-3-030-36808-1_37 |
| Depositing User: | Morris, Helen |
| Date Deposited: | 31 Jul 2020 12:42 |
| Last Modified: | 13 Jul 2024 04:03 |
| Item Type: | Book Section |
Note: this is the author's final manuscript and may differ from the published version, which should be used for citation purposes.