TY - CONF
T1 - SAR Image Despeckling with Residual-in-Residual Dense Generative Adversarial Network
AU - Bai, Yunpeng
AU - Xiao, Yayuan
AU - Hou, Xuan
AU - Li, Ying
AU - Shang, Changjing
AU - Shen, Qiang
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Deep convolutional neural networks have demonstrated remarkable capability in Synthetic Aperture Radar (SAR) image speckle removal. Such approaches nevertheless struggle to balance speckle suppression against the preservation of spatial information, particularly under strong speckle noise. In this paper, a novel residual-in-residual dense generative adversarial network is proposed to effectively suppress SAR image speckle while retaining rich spatial information. A despeckling sub-network composed of residual-in-residual dense blocks with an encoder-decoder structure is devised to learn an end-to-end mapping from noisy images to noise-free images, where the combination of the residual-in-residual structure and dense connections significantly enhances feature representation capability. In addition, a discriminator sub-network with a fully convolutional structure is introduced, and an adversarial learning strategy is adopted to continuously refine the quality of the despeckled results. Systematic experiments on simulated and real SAR images demonstrate that the proposed approach offers superior performance in both quantitative and visual evaluation compared to state-of-the-art methods.
AB - Deep convolutional neural networks have demonstrated remarkable capability in Synthetic Aperture Radar (SAR) image speckle removal. Such approaches nevertheless struggle to balance speckle suppression against the preservation of spatial information, particularly under strong speckle noise. In this paper, a novel residual-in-residual dense generative adversarial network is proposed to effectively suppress SAR image speckle while retaining rich spatial information. A despeckling sub-network composed of residual-in-residual dense blocks with an encoder-decoder structure is devised to learn an end-to-end mapping from noisy images to noise-free images, where the combination of the residual-in-residual structure and dense connections significantly enhances feature representation capability. In addition, a discriminator sub-network with a fully convolutional structure is introduced, and an adversarial learning strategy is adopted to continuously refine the quality of the despeckled results. Systematic experiments on simulated and real SAR images demonstrate that the proposed approach offers superior performance in both quantitative and visual evaluation compared to state-of-the-art methods.
KW - dense connection
KW - despeckling
KW - generative adversarial network
KW - residual learning
KW - SAR
UR - http://www.scopus.com/inward/record.url?scp=85180273920&partnerID=8YFLogxK
U2 - 10.1109/ICASSP49357.2023.10096355
DO - 10.1109/ICASSP49357.2023.10096355
M3 - Conference Proceeding (Non-Journal item)
AN - SCOPUS:85180273920
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
BT - ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
T2 - 48th IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2023
Y2 - 4 June 2023 through 10 June 2023
ER -