Citation: CAO D, HU C, WANG C B, et al. Quality Improvement of CBCT-Synthesized CT Images Based on Improved CycleGAN[J]. CT Theory and Applications, xxxx, x(x): 1-10. DOI: 10.15953/j.ctta.2024.152. (in Chinese).
Objective: This study proposes ViTD-CycleGAN, an unsupervised learning model based on an improved CycleGAN, to synthesize computed tomography (CT) images from cone-beam computed tomography (CBCT) images, with the aim of enhancing the quality and realism of synthetic CT (sCT) images. Methods: ViTD-CycleGAN incorporates into its generator a U-Net framework built on a vision Transformer (ViT) and depth-wise convolution (DW), leveraging the Transformer's self-attention mechanism to extract and preserve crucial features and fine detail. In addition, a gradient penalty and a pixel-wise loss function are introduced to improve training stability and image consistency. Results: Quantitative metrics (MAE, PSNR, and SSIM) on head-and-neck and chest datasets indicate that the proposed model outperforms existing unsupervised learning methods. Ablation experiments show that depth-wise convolution significantly improves model performance, and visual analysis confirms that sCT images generated by ViTD-CycleGAN exhibit higher image quality and realism. Conclusion: Compared with other unsupervised learning methods, the proposed method improves the quality of CBCT-synthesized CT images and thus offers potential clinical application value.
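The abstract highlights depth-wise convolution (DW) as a key component of the generator. The paper itself provides no code; the following is only a minimal NumPy sketch of the depth-wise convolution operation (the function name `depthwise_conv2d` and the shapes are illustrative assumptions, not the authors' implementation), showing how each input channel is filtered by its own kernel without mixing channels, which is what keeps the parameter count low relative to a standard convolution.

```python
import numpy as np

def depthwise_conv2d(x, kernels):
    """Depth-wise 2D convolution (valid padding, stride 1).

    Unlike a standard convolution, each channel is convolved with
    its own single kernel and channels are never mixed.

    x:       (C, H, W) input feature map
    kernels: (C, kH, kW) one kernel per channel
    """
    c, h, w = x.shape
    _, kh, kw = kernels.shape
    out = np.zeros((c, h - kh + 1, w - kw + 1))
    for ch in range(c):                      # one kernel per channel
        for i in range(h - kh + 1):
            for j in range(w - kw + 1):
                out[ch, i, j] = np.sum(x[ch, i:i + kh, j:j + kw] * kernels[ch])
    return out

# Example: a 3x3 mean filter applied per channel to a constant image
x = np.ones((2, 5, 5))
k = np.full((2, 3, 3), 1.0 / 9.0)
y = depthwise_conv2d(x, k)  # shape (2, 3, 3), all values 1.0
```

In a framework such as PyTorch the same effect is obtained by setting the convolution's `groups` argument equal to the number of input channels; a ViT-based U-Net generator would typically pair such DW layers with 1x1 point-wise convolutions to mix channels cheaply.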
[1] JAFFRAY D A. Image-guided radiotherapy: From current concept to future perspectives[J]. Nature Reviews Clinical Oncology, 2012, 9(12): 688-699. DOI: 10.1038/nrclinonc.2012.194.
[2] ZHANG S N, TIAN L, ZHAO X. Feasibility study of cone-beam CT for quality assurance of synthetic CT in image-guided radiotherapy for prostate cancer[J]. Chinese Medical Equipment Journal, 2023, 44(8): 50-54. DOI: 10.19745/j.1003-8868.2023161. (in Chinese).
[3] LIU Y, LEI Y, WANG T, et al. MRI-based treatment planning for liver stereotactic body radiotherapy: Validation of a deep learning-based synthetic CT generation method[J]. The British Journal of Radiology, 2019, 92(1100): 20190067. DOI: 10.1259/bjr.20190067.
[4] RAN X Q, LI J F, CAO S A. Analysis of the manifestations of ENT in low-dose oral CBCT examination[J]. CT Theory and Applications, 2022, 31(3): 392-398. DOI: 10.15953/j.ctta.2021.035. (in Chinese).
[5] HUNTER A K, MCDAVID W D. Characterization and correction of cupping effect artifacts in cone beam CT[J]. Dentomaxillofacial Radiology, 2012, 41(3): 217. DOI: 10.1259/dmfr/19015946.
[6] LONG J, SHELHAMER E, DARRELL T. Fully convolutional networks for semantic segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 3431-3440. DOI: 10.1109/CVPR.2015.7298965.
[7] CHEN S, QIN A, ZHOU D, et al. U-net-generated synthetic CT images for magnetic resonance imaging-only prostate intensity-modulated radiation therapy treatment planning[J]. Medical Physics, 2018, 45(12): 5659-5665. DOI: 10.1002/mp.13247.
[8] ZHANG S, HUANG W, WANG H. Lesion detection of computed tomography and magnetic resonance imaging image based on fully convolutional networks[J]. Journal of Medical Imaging and Health Informatics, 2018, 8(9): 1819-1825. DOI: 10.1166/jmihi.2018.2565.
[9] KIDA S, NAKAMOTO T, NAKANO M, et al. Cone beam computed tomography image quality improvement using a deep convolutional neural network[J]. Cureus, 2018, 10(4). DOI: 10.7759/cureus.2548.
[10] LI Y, ZHU J, LIU Z, et al. A preliminary study of using a deep convolution neural network to generate synthesized CT images based on CBCT for adaptive radiotherapy of nasopharyngeal carcinoma[J]. Physics in Medicine & Biology, 2019, 64(14): 145010. DOI: 10.1088/1361-6560/ab2770.
[11] JAMES G, WITTEN D, HASTIE T, et al. Unsupervised learning[M]//An Introduction to Statistical Learning: with Applications in Python. Cham: Springer International Publishing, 2023: 503-556. DOI: 10.1007/978-3-031-38747-0_12.
[12] SANDFORT V, YAN K, PICKHARDT P J, et al. Data augmentation using generative adversarial networks (CycleGAN) to improve generalizability in CT segmentation tasks[J]. Scientific Reports, 2019, 9(1): 16884. DOI: 10.1038/s41598-019-52737-x.
[13] LIU Y, LEI Y, WANG T, et al. CBCT-based synthetic CT generation using deep-attention CycleGAN for pancreatic adaptive radiotherapy[J]. Medical Physics, 2020, 47(6): 2472-2483. DOI: 10.1002/mp.14121.
[14] HARMS J, LEI Y, WANG T, et al. Paired cycle-GAN-based image correction for quantitative cone-beam computed tomography[J]. Medical Physics, 2019, 46(9): 3998-4009. DOI: 10.1002/mp.13656.
[15] ZENG G, ZHENG G. Hybrid generative adversarial networks for deep MR to CT synthesis using unpaired data[C]//Medical Image Computing and Computer Assisted Intervention (MICCAI 2019), Shenzhen, China, October 13-17, 2019, Proceedings, Part IV. Springer International Publishing, 2019: 759-767. DOI: 10.1007/978-3-030-32251-9_83.
[16] TORBUNOV D, HUANG Y, YU H, et al. UVCGAN: UNet vision transformer cycle-consistent GAN for unpaired image-to-image translation[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2023: 702-712. DOI: 10.48550/arXiv.2203.02557.
[17] TORBUNOV D, HUANG Y, TSENG H H, et al. UVCGAN v2: An improved cycle-consistent GAN for unpaired image-to-image translation[J]. arXiv preprint arXiv:2303.16280, 2023. DOI: 10.48550/arXiv.2303.16280.
[18] ZHENG M, LI T, ZHU R, et al. Conditional Wasserstein generative adversarial network-gradient penalty-based approach to alleviating imbalanced data classification[J]. Information Sciences, 2020, 512: 1009-1023. DOI: 10.1016/j.ins.2019.10.014.
[19] YU Z, LI X, SHI J, et al. Revisiting pixel-wise supervision for face anti-spoofing[J]. IEEE Transactions on Biometrics, Behavior, and Identity Science, 2021, 3(3): 285-295. DOI: 10.1109/TBIOM.2021.3065526.
[20] AVANTS B B, TUSTISON N J, SONG G, et al. A reproducible evaluation of ANTs similarity metric performance in brain image registration[J]. NeuroImage, 2011, 54(3): 2033-2044. DOI: 10.1016/j.neuroimage.2010.09.025.
[21] MOLLINK J, VAN BAARSEN K M, DEDEREN P J, et al. Dentatorubrothalamic tract localization with postmortem MR diffusion tractography compared to histological 3D reconstruction[J]. Brain Structure and Function, 2016, 221(7): 3487-3501. DOI: 10.1007/s00429-015-1115-7.
[22] FAN X L, WEN Y Q, QIAO Z W. Sparse reconstruction of computed tomography images with transformer enhanced U-net[J]. CT Theory and Applications, 2024, 33(1): 1-12. DOI: 10.15953/j.ctta.2023.183. (in Chinese).
[23] JIQING C, DEPENG W, TENG L, et al. All-weather road drivable area segmentation method based on CycleGAN[J]. The Visual Computer, 2023, 39(10): 5135-5151. DOI: 10.1007/s00371-022-02650-8.
[24] MAO X, QI G, CHEN Y, et al. Towards robust vision transformer[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 12042-12051. DOI: 10.48550/arXiv.2105.07926.
[25] BANNER R, NAHSHAN Y, SOUDRY D. Post training 4-bit quantization of convolutional networks for rapid-deployment[J]. Advances in Neural Information Processing Systems, 2019, 32.