ISSN 1004-4140
CN 11-3017/P

Low-dose CT Image Reconstruction Method Based on CNN and Transformer Coupling Network

QIAO Yiyu, QIAO Zhiwei

Citation: QIAO Y Y, QIAO Z W. Low-dose CT image reconstruction method based on CNN and transformer coupling network[J]. CT Theory and Applications, 2022, 31(6): 697-707. DOI: 10.15953/j.ctta.2022.114. (in Chinese).


Funding: General Program of the National Natural Science Foundation of China (Fast Four-dimensional EPRI Tumor Oxygen Imaging Driven by Coupled Models and Data, 62071281); Central Government Funds for Guiding Local Science and Technology Development (Fast Four-dimensional EPRI Imaging Method Jointly Constrained by Novel TV and Learned Priors, YDZJSX2021A003); Key R&D Program of Shanxi Province (Construction of the China-US Joint Laboratory Platform for Electron Paramagnetic Resonance Imaging (EPRI), 201803D421012); Shanxi Province Merit-based Funding Program for Scientific and Technological Activities of Overseas Scholars (Research on Four-dimensional EPRI Imaging Methods Based on Compressed Sensing, 2018-172); Shanxi Province Research Funding Program for Returned Overseas Scholars (Research on Fast EPRI Tumor Oxygen Imaging Methods Based on a Novel Four-dimensional TV Regularization Mechanism, 2020-008).
Details
    Author information:

    QIAO Yiyu: female, master's student in Electronic Information at Shanxi University. Her research focuses on medical image reconstruction and image processing. E-mail: 908227491@qq.com

    QIAO Zhiwei: male, Ph.D., professor and doctoral supervisor at the School of Computer and Information Technology, Shanxi University. His research focuses on medical image reconstruction, signal processing, and large-scale optimization. E-mail: zqiao@sxu.edu.cn

    Corresponding author: QIAO Zhiwei

  • CLC number: O242; TP391.41

Low-dose CT Image Reconstruction Method Based on CNN and Transformer Coupling Network

  • Abstract: Keeping the number of projection angles fixed while reducing the X-ray dose at each angle is an effective way to realize low-dose CT; however, the images reconstructed in this way are noisy. At present, deep learning denoising methods represented by convolutional neural networks (CNN) have become the standard approach to low-dose CT image denoising. Inspired by the strong performance of the Transformer in computer vision tasks, this paper proposes a CNN-Transformer coupling network (CTC) to further improve CT image denoising. The CTC network combines the local feature association ability of CNNs with the global context capture ability of the Transformer: it builds eight core network blocks, each composed of CNN components and an improved Transformer component, and interconnects them through residual connections and an information reuse mechanism. Compared with four existing denoising networks, the CTC network shows stronger denoising ability and achieves high-accuracy low-dose CT image reconstruction.
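    As a rough illustration of the coupling idea summarized in the abstract, the sketch below pairs a convolutional branch (local feature association) with a global self-attention branch (global context capture) inside each block, then chains eight such blocks using residual connections and dense feature reuse. This is a minimal PyTorch sketch under assumed settings: the names CTCBlock and CTCNetSketch, the 64-channel width, the 4 attention heads, and the plain full-image self-attention are illustrative choices rather than the paper's improved Transformer component, and the image edge enhancement block of Figure 2 is not reproduced here.

    import torch
    import torch.nn as nn


    class CTCBlock(nn.Module):
        """One illustrative CNN-Transformer coupling block (CTCB): a local convolutional
        branch and a global self-attention branch, fused and wrapped in a residual connection."""

        def __init__(self, channels: int = 64, heads: int = 4):
            super().__init__()
            # Local branch: plain convolutions model neighborhood correlations.
            self.local = nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            )
            # Global branch: multi-head self-attention over all spatial positions.
            self.norm = nn.LayerNorm(channels)
            self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
            # 1x1 convolution fuses the two branches back to the block width.
            self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, h, w = x.shape
            local_feat = self.local(x)
            tokens = self.norm(x.flatten(2).transpose(1, 2))        # (B, H*W, C)
            global_feat, _ = self.attn(tokens, tokens, tokens)      # global self-attention
            global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)
            fused = self.fuse(torch.cat([local_feat, global_feat], dim=1))
            return x + fused                                        # residual connection


    class CTCNetSketch(nn.Module):
        """Eight CTCBs with dense feature reuse, ending in a residual noise estimate."""

        def __init__(self, channels: int = 64, num_blocks: int = 8):
            super().__init__()
            self.head = nn.Conv2d(1, channels, kernel_size=3, padding=1)
            self.blocks = nn.ModuleList([CTCBlock(channels) for _ in range(num_blocks)])
            # Information reuse: every block's output is concatenated before the tail.
            self.tail = nn.Conv2d(num_blocks * channels, 1, kernel_size=3, padding=1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            feat = self.head(x)
            reused = []
            for block in self.blocks:
                feat = block(feat)
                reused.append(feat)
            noise = self.tail(torch.cat(reused, dim=1))
            return x - noise   # predict the noise and subtract it (residual learning)


    if __name__ == "__main__":
        # A 32x32 patch keeps full self-attention cheap enough for a quick CPU check.
        model = CTCNetSketch()
        noisy_patch = torch.randn(1, 1, 32, 32)
        print(model(noisy_patch).shape)  # torch.Size([1, 1, 32, 32])

    A real training setup would additionally require paired low-dose and normal-dose CT images and an image-domain loss; those details follow the paper rather than this sketch.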
  • Figure 1. Schematic of the CNN-Transformer coupling block

    Figure 2. Schematic of the image edge enhancement block

    Figure 3. Architecture of the CTC network

    Figure 4. Lung denoising results (display window: [0, 1])

    Figure 5. Zoomed-in views of the lung denoising results (display window: [0, 1])

    Figure 6. Abdomen denoising results (display window: [0, 1])

    Figure 7. Zoomed-in views of the abdomen denoising results (display window: [0, 1])

    Figure 8. Influence of the number of CTC blocks (CTCBs) on the experimental results

    Figure 9. Ablation study results

    Table 1. Experimental comparison of low-dose CT image reconstruction

    Metric            DnCNN    RED-CNN   BRDNet   Uformer   CTC      CTC+
    R                 31.518   32.325    33.232   33.223    33.574   33.695
    S                 0.941    0.944     0.954    0.947     0.957    0.958
    E                 0.028    0.025     0.023    0.023     0.022    0.022
    Training time/h   3.51     9.00      10.12    14.43     13.33    21.66
    Parameters/G      4.32     6.14      7.83     10.63     9.98     11.62

    Table 2. Effect of the image edge enhancement block on the experimental results

    Network model   R        S       E
    CTC-            33.310   0.954   0.023
    CTC             33.574   0.957   0.022
Publication history
  • Received: 2022-06-11
  • Revised: 2022-07-25
  • Accepted: 2022-07-26
  • Available online: 2022-08-11
  • Published: 2022-11-02
