Low-dose CT Image Reconstruction Method Based on CNN and Transformer Coupling Network
Abstract: Keeping the number of projection angles fixed while reducing the radiation dose at each angle is an effective way to realize low-dose CT; however, the images reconstructed from such data are heavily contaminated by noise. At present, deep learning denoising methods represented by convolutional neural networks (CNN) have become the standard approach to low-dose CT image denoising. Inspired by the strong performance of the Transformer in computer vision tasks, this paper proposes a CNN-Transformer coupling network (CTC) to further improve CT image denoising. The CTC network combines the local-information modeling ability of the CNN with the global-information capture ability of the Transformer: it constructs eight core network blocks, each composed of CNN components and an improved Transformer component, and interconnects them through a residual connection mechanism and an information reuse mechanism. Compared with four existing denoising networks, the CTC network demonstrates stronger denoising ability and achieves high-accuracy low-dose CT image reconstruction.
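The abstract only outlines the architecture, so the following PyTorch sketch illustrates one plausible reading of the "CNN + Transformer" coupled block with residual connections and feature reuse. All class and parameter names (CTCBlock, CTCNet, channels, num_blocks) are hypothetical illustrations, not the authors' implementation.

```python
# A minimal, illustrative sketch of the CNN/Transformer coupling described in the
# abstract. Module names and hyperparameters are assumptions, not the paper's code.
import torch
import torch.nn as nn


class CTCBlock(nn.Module):
    """One core block: a CNN branch for local features plus a self-attention
    branch for global context, merged and wrapped in a residual connection."""

    def __init__(self, channels: int = 64, num_heads: int = 4):
        super().__init__()
        self.conv_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local = self.conv_branch(x)                      # local information (CNN)
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2)) # (B, H*W, C) token sequence
        glob, _ = self.attn(tokens, tokens, tokens)      # global information (attention)
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        return x + self.fuse(torch.cat([local, glob], dim=1))  # residual connection


class CTCNet(nn.Module):
    """Eight coupled blocks; earlier block outputs are concatenated and compressed
    before each new block, mimicking the information reuse mechanism."""

    def __init__(self, channels: int = 64, num_blocks: int = 8):
        super().__init__()
        self.head = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.blocks = nn.ModuleList(CTCBlock(channels) for _ in range(num_blocks))
        self.reuse = nn.ModuleList(
            nn.Conv2d((i + 1) * channels, channels, kernel_size=1)
            for i in range(num_blocks)
        )
        self.tail = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [self.head(x)]
        for block, reuse in zip(self.blocks, self.reuse):
            fused = reuse(torch.cat(feats, dim=1))       # reuse all earlier features
            feats.append(block(fused))
        return x - self.tail(feats[-1])                  # predict and subtract the noise


if __name__ == "__main__":
    net = CTCNet()
    print(net(torch.randn(1, 1, 64, 64)).shape)          # torch.Size([1, 1, 64, 64])
```

For clarity the sketch attends over all spatial tokens; a practical implementation would more likely use windowed attention (as in Swin Transformer or Uformer) to keep memory manageable on full-size CT slices.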
Table 1  Experimental comparison of low-dose CT image reconstruction

Metric            DNCNN     RED-CNN   BRDNet    Uformer   CTC       CTC+
R                 31.518    32.325    33.232    33.223    33.574    33.695
S                 0.941     0.944     0.954     0.947     0.957     0.958
E                 0.028     0.025     0.023     0.023     0.022     0.022
Training time/h   3.51      9.00      10.12     14.43     13.33     21.66
Parameters/G      4.32      6.14      7.83      10.63     9.98      11.62
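The R, S, and E metrics in Table 1 are not defined in this excerpt. Assuming they denote PSNR, SSIM, and RMSE respectively (a common evaluation triple for CT denoising), the sketch below shows how they could be computed; the mapping and the helper name evaluate are my assumptions, not the paper's.

```python
# Computes the assumed (PSNR, SSIM, RMSE) triple for a reference/denoised image pair.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate(reference: np.ndarray, denoised: np.ndarray, data_range: float = 1.0):
    """Return (R, S, E) under the PSNR/SSIM/RMSE assumption."""
    r = peak_signal_noise_ratio(reference, denoised, data_range=data_range)
    s = structural_similarity(reference, denoised, data_range=data_range)
    e = float(np.sqrt(np.mean((reference - denoised) ** 2)))
    return r, s, e


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.random((128, 128)).astype(np.float32)
    noisy = np.clip(clean + 0.02 * rng.standard_normal((128, 128)), 0.0, 1.0)
    print(evaluate(clean, noisy.astype(np.float32)))
```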
Table 2  Effect of the image edge enhancement block on experimental results

Network model    R         S        E
CTC-             33.310    0.954    0.023
CTC              33.574    0.957    0.022

Note: CTC- denotes the CTC network with the image edge enhancement block removed.
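Table 2 ablates an image edge enhancement block but this excerpt does not specify its form. The sketch below shows one common realization based on fixed Sobel gradient kernels; treating the block as Sobel-based, and the names EdgeEnhanceBlock and scale, are assumptions for illustration only.

```python
# A minimal sketch of an edge enhancement step of the kind Table 2 ablates.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EdgeEnhanceBlock(nn.Module):
    """Extract horizontal/vertical Sobel gradients and add the (learnably scaled)
    edge magnitude back onto the input to sharpen structure boundaries."""

    def __init__(self):
        super().__init__()
        gx = torch.tensor([[-1.0, 0.0, 1.0],
                           [-2.0, 0.0, 2.0],
                           [-1.0, 0.0, 1.0]])
        gy = gx.t()
        # Two fixed 3x3 Sobel kernels, shape (out=2, in=1, 3, 3), not trained.
        self.register_buffer("kernels", torch.stack([gx, gy]).unsqueeze(1))
        self.scale = nn.Parameter(torch.tensor(0.1))   # learnable enhancement strength

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        grad = F.conv2d(x, self.kernels, padding=1)    # (B, 2, H, W) gradient maps
        magnitude = torch.sqrt(grad.pow(2).sum(dim=1, keepdim=True) + 1e-6)
        return x + self.scale * magnitude              # edge-enhanced image


if __name__ == "__main__":
    block = EdgeEnhanceBlock()
    print(block(torch.randn(1, 1, 64, 64)).shape)      # torch.Size([1, 1, 64, 64])
```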