Single image super-resolution via a ternary attention network



Publisher
Springer Journals
Copyright
Copyright © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
ISSN
0924-669X
eISSN
1573-7497
DOI
10.1007/s10489-022-04129-4

Abstract

Recently, deep convolutional neural networks (CNNs) have been widely explored for single image super-resolution (SISR) and achieve excellent performance. However, most existing CNN-based SISR methods focus mainly on wider or deeper architecture design, ignoring both the internal dependencies between the features of different layers and the intrinsic statistical properties of the feature maps, which hinders the network from reaching its full representational power. To address this issue, we propose a ternary attention network (TAN) for effective feature extraction and feature-correlation learning. Specifically, we introduce a layer attention mechanism (LAM) to make full use of the features generated by each layer of the network. Furthermore, we present a spatial attention mechanism (SAM) that exploits the internal statistical characteristics of the features to enhance them. Finally, we design a new channel attention mechanism (CAM) to preserve feature diversity along the channel dimension. Extensive experiments show that TAN achieves better quantitative metrics and visual quality than state-of-the-art methods.
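The abstract does not detail how the channel attention mechanism (CAM) is computed; as a rough intuition for channel attention in general, the following is a minimal NumPy sketch of the common squeeze-and-excitation pattern (global average pooling, a small gating network, then per-channel rescaling). All names and weight shapes here are illustrative assumptions, not the paper's actual CAM design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """SE-style channel attention on a (C, H, W) feature map (illustrative only).

    feat : (C, H, W) feature map
    w1   : (C//r, C) squeeze projection, r = reduction ratio
    w2   : (C, C//r) excitation projection
    Returns the feature map rescaled per channel by a learned gate in (0, 1).
    """
    # Squeeze: collapse spatial dims to one descriptor per channel
    descriptor = feat.mean(axis=(1, 2))            # shape (C,)
    # Excitation: bottleneck MLP with ReLU, then sigmoid gate
    hidden = np.maximum(w1 @ descriptor, 0.0)      # shape (C//r,)
    gate = sigmoid(w2 @ hidden)                    # shape (C,), values in (0, 1)
    # Rescale each channel by its gate (broadcast over H and W)
    return feat * gate[:, None, None]

rng = np.random.default_rng(0)
c, h, w, r = 8, 4, 4, 2
feat = rng.standard_normal((c, h, w))
w1 = rng.standard_normal((c // r, c)) * 0.1        # random stand-in weights
w2 = rng.standard_normal((c, c // r)) * 0.1
out = channel_attention(feat, w1, w2)
print(out.shape)
```

Because the gate is a sigmoid, every channel is attenuated rather than amplified; a trained network would learn `w1` and `w2` so that informative channels receive gates near 1 and uninformative ones near 0.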

Journal

Applied Intelligence, Springer Journals

Published: Jun 1, 2023

Keywords: Super-resolution; Attention mechanism; Deep convolutional neural network
