Abstract
The need for high-quality images acquired at limited radiation dose has driven the widespread adoption of super-resolution (SR) technology in medical imaging, particularly in conjunction with low-dose computed tomography (CT) and low-field MRI. Nevertheless, SR remains a formidable challenge due to its inherent complexity and the stringent requirements on image quality. In this study, we propose the mixed residual attention super-resolution generative adversarial network (MRA-SRGAN), a GAN-based model for medical image super-resolution (MISR) designed to address these challenges. By integrating residual channel attention groups (RCAGs) and convolutional block attention modules (CBAMs) in the generator, and depthwise separable convolutions (DS-Convs) in the discriminator, our model enhances local detail recovery while reducing computational complexity. On the LUNA16 and Covid19-CT-Scans datasets, MRA-SRGAN achieves peak signal-to-noise ratios (PSNRs) of 32.121 dB and 26.513 dB and structural similarity indices (SSIMs) of 0.990 and 0.981, respectively, outperforming traditional methods. The approach effectively preserves high-frequency information; future work will focus on further optimizing the attention mechanisms and improving clinical applicability.
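For concreteness, the sketch below gives minimal PyTorch-style definitions of two building blocks named above: a CBAM-style attention module (as used in the generator) and a depthwise separable convolution (as used in the discriminator). It is an illustrative sketch, not the authors' implementation; channel counts, kernel sizes, and the reduction ratio are assumptions.

```python
# Illustrative sketch (not the paper's code): minimal PyTorch definitions of a
# CBAM attention module and a depthwise separable convolution (DS-Conv).
# Hyperparameters (reduction ratio, kernel sizes, channel counts) are assumed.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """CBAM channel branch: squeeze-and-excitation style reweighting."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return x * torch.sigmoid(avg + mx)


class SpatialAttention(nn.Module):
    """CBAM spatial branch: attention map from channel-pooled statistics."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))


class CBAM(nn.Module):
    """Convolutional block attention module: channel then spatial attention."""
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))


class DSConv(nn.Module):
    """Depthwise separable convolution: per-channel 3x3 conv + 1x1 pointwise conv."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)       # dummy feature map
    print(CBAM(64)(x).shape)             # torch.Size([1, 64, 32, 32])
    print(DSConv(64, 128)(x).shape)      # torch.Size([1, 128, 32, 32])
```

The DS-Conv factorization is what reduces the discriminator's computational cost: a standard 3x3 convolution costs roughly in_ch * out_ch * 9 multiplications per pixel, whereas the depthwise-plus-pointwise pair costs about in_ch * 9 + in_ch * out_ch.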