Advances in Research on Low-dose CT Imaging Algorithms Based on Deep Learning
-
Abstract: Computed tomography (CT) is widely used in clinical diagnosis because of its fast imaging speed and high resolution. However, higher radiation doses damage human tissues and organs, while lower doses lead to serious deterioration of imaging quality. To resolve this contradiction, researchers have focused on low-dose CT (LDCT) imaging technology, studying how to minimize the harm caused by X-ray radiation to the human body while ensuring that the imaging quality meets the needs of clinical diagnosis. In recent years, deep learning has developed rapidly in the field of artificial intelligence and has been widely applied in image processing, pattern recognition, and signal processing. Driven by big data, LDCT imaging algorithms based on deep learning have made great progress. This paper reviews the development of low-dose CT imaging algorithms in recent years from three aspects: the CT imaging process, low-dose CT noise modeling, and the design of imaging algorithms. In particular, imaging algorithms based on deep learning are systematically elaborated and analyzed. Finally, future developments in the field of LDCT image artifact suppression are discussed.
-
Keywords:
- deep learning
- low-dose CT
- artifact suppression
- noise modeling
-
Coronary CT angiography (CCTA) is a non-invasive imaging examination widely used in clinical practice for diagnosing and screening coronary artery disease [1-2] and predicting cardiovascular events [3]. CCTA image quality affects the assessment of the degree of coronary stenosis, and the radiation dose and adverse reactions to iodinated contrast agents associated with CCTA examinations have also attracted considerable attention [4-5]. This study retrospectively analyzed the factors affecting the objective image quality of coronary CT angiography performed on a third-generation dual-source CT scanner, in order to help technologists select scanning parameters and injection protocols more accurately and rationally, thereby improving the examination success rate and reducing radiation dose and iodine intake.
1. Materials and methods
1.1 General information
A total of 1035 consecutive patients who underwent coronary CT angiography on a third-generation dual-source CT scanner in our hospital between January 2020 and June 2021 were retrospectively collected. Inclusion criteria: (1) complete basic clinical information and scanning parameters; (2) calcium score ≤ 1000. Exclusion criteria: (1) renal insufficiency (creatinine > 1.7 mg/dL); (2) allergy to iodinated contrast agents; (3) poor subjective image quality [1], i.e. artifacts in 40% of the coronary segments (6 of 15 segments); (4) a history of coronary angioplasty, coronary stent implantation, coronary artery bypass grafting, heart valve replacement, or pacemaker/defibrillator implantation. A total of 684 patients met the criteria and were included in the study. The study was reviewed and approved by the medical ethics committee of our hospital.
1.2 Coronary CT angiography methods
1.2.1 Patient preparation:
All patients received strict breath-holding training; no heart-rate-controlling drugs were administered before the examination; height and weight were measured; and an indwelling catheter was placed in the median cubital vein of the left arm.
1.2.2 Examination methods: All patients underwent coronary CTA on a third-generation dual-source CT scanner (SIEMENS Force). The scan range extended from 1 cm below the tracheal carina to the diaphragmatic surface of the heart. The monitoring slice was placed at the root of the ascending aorta with a trigger threshold of 100 HU, and the scan was triggered automatically once the threshold was reached. Contrast agent and saline were injected with a dual-barrel power injector. Scanning parameters: retrospective or prospective ECG gating, tube voltage 70-120 kV, reference tube current 280 mAs/rot, collimation 192 × 0.6 mm, slice thickness 0.75 mm, rotation time 0.25 s. Images were reconstructed with advanced modeled iterative reconstruction (ADMIRE) at iteration strength 3.
1.2.3 Image analysis:
Two radiologists, each with more than 3 years of experience in cardiovascular radiology, measured the CT values of the left main artery, the proximal left anterior descending artery, the proximal left circumflex artery, and the proximal right coronary artery, and the mean values were taken. The region of interest (ROI) covered approximately 80% of the measured vessel lumen area while avoiding the vessel wall margins as well as calcified and non-calcified plaques; each vessel was measured three times and the results were averaged.
1.2.4 Factors analyzed:
Based on the degree of vascular enhancement, objective image quality was rated as good if the CT values of all four vessels were within 400-900 HU, and as poor if the CT value of any one vessel was < 400 HU or > 900 HU. The threshold of CT value < 400 HU for poor objective image quality was adopted from the study by Xu et al. [6]; since excessively high coronary CT values impair the depiction of non-calcified plaques [7], this study additionally treated CT values > 900 HU as poor objective image quality. Basic clinical information (age, sex, height, weight, body mass index (BMI), heart rate, and regularity of heart rhythm), the scanning protocol (contrast agent volume, injection rate, iodine concentration, tube voltage, and whether a high-pitch Flash scan was used), and the calcium score were recorded. According to the 2016 Chinese expert consensus on medical nutrition therapy for overweight/obesity, patients were divided into four groups by BMI (weight in kg divided by the square of height in m): underweight (BMI < 18.5), normal (18.5 ≤ BMI < 24), overweight (24 ≤ BMI < 28), and obese (BMI ≥ 28). Heart rate was divided into a low-heart-rate group (< 60 bpm), a normal group (60-100 bpm), and a high-heart-rate group (> 100 bpm). Calcium score was divided into four groups (0, 1-99, 100-399, and 400-1000).
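Purely as an illustration, the grouping rules above can be expressed in a few lines of code. The following Python sketch is not the authors' code; the column names (`bmi`, `hr`, `cacs`) are ours and only serve to show how the cut-offs would be applied to a patient table.

```python
# Minimal sketch of the grouping rules used in this study (illustrative column names).
import pandas as pd

def bmi_group(bmi: float) -> str:
    # Cut-offs from the 2016 Chinese expert consensus on overweight/obesity
    if bmi < 18.5: return "underweight"
    if bmi < 24:   return "normal"
    if bmi < 28:   return "overweight"
    return "obese"

def heart_rate_group(hr: float) -> str:
    return "low" if hr < 60 else ("normal" if hr <= 100 else "high")

def calcium_group(score: float) -> str:
    for upper, label in [(0, "0"), (99, "1-99"), (399, "100-399"), (1000, "400-1000")]:
        if score <= upper:
            return label
    return ">1000"  # scores above 1000 were excluded by the inclusion criteria

df = pd.DataFrame({"bmi": [17.9, 23.0, 26.5, 30.1],
                   "hr": [55, 72, 110, 88],
                   "cacs": [0, 45, 250, 700]})
df["bmi_group"] = df["bmi"].map(bmi_group)
df["hr_group"] = df["hr"].map(heart_rate_group)
df["cacs_group"] = df["cacs"].map(calcium_group)
print(df)
```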
1.3 Statistical methods
Statistical analyses were performed with SPSS 20.0, MedCalc, GraphPad Prism 5, and R (version 3.5.2), with P < 0.05 considered statistically significant. The t-test was used for normally distributed data; the rank-sum test, chi-square test, or Fisher's exact test was used otherwise. Variables with P < 0.05 in the univariate analysis were entered as input variables into a binary logistic regression to build a predictive model; receiver operating characteristic (ROC) curves were drawn with MedCalc, and a nomogram model was constructed in R.
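The analysis itself was run in SPSS, MedCalc, and R. As a rough, illustrative equivalent (not the authors' code), a Python sketch of the same workflow might look like the following, assuming a DataFrame `df` with one row per patient, a binary outcome column `good_quality` (1 = all four vessels within 400-900 HU), and the predictor columns named here; the optimal cutoff is taken from the Youden index, which is one common convention.

```python
# Sketch of the modelling workflow: logistic regression on the univariately significant
# predictors, followed by ROC analysis. Column names are assumptions for illustration.
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_curve, auc

predictors = ["bmi_group", "iodine_mg_ml", "contrast_ml", "rate_ml_s", "kv"]  # P < 0.05 in univariate tests

X = pd.get_dummies(df[predictors], columns=["bmi_group"], drop_first=True)  # dummy-code the BMI groups
X = sm.add_constant(X.astype(float))
model = sm.Logit(df["good_quality"], X).fit()   # binary logistic regression
print(model.summary())                          # beta, S.E., Wald statistic, P; exp(beta) gives the OR

fpr, tpr, thresholds = roc_curve(df["good_quality"], model.predict(X))
print("AUC =", auc(fpr, tpr))                   # the paper reports AUC = 0.757
best = (tpr - fpr).argmax()                     # Youden index as one way to pick the optimal cutoff
print("optimal cutoff =", thresholds[best])
```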
2. Results
2.1 Factors affecting objective image quality
Of the 684 patients, 79 had poor objective image quality (50 with CT values < 400 HU and 29 with CT values > 900 HU) and 605 had good objective image quality. BMI (P < 0.001), iodine concentration (P = 0.001), contrast agent volume (P = 0.005), injection rate (P = 0.010), and tube voltage (P < 0.001) differed significantly between the two groups, whereas sex, age, heart rate, regularity of heart rhythm, height, weight, calcium score, and use of high-pitch Flash scanning did not. Detailed results are given in Table 1.
Table 1. Comparison of the poor and good objective image quality groups in terms of basic clinical information and scanning parameters

| Variable | Poor objective image quality (N=79) | Good objective image quality (N=605) | t/T/χ² | P |
| --- | --- | --- | --- | --- |
| Sex | | | 3.607 | 0.058 |
| Female | 43.0 (54.4%) | 261.0 (43.1%) | | |
| Male | 36.0 (45.6%) | 344.0 (56.9%) | | |
| Age (years) | 62.5 ± 12.5 | 62.4 ± 12.7 | 0.073 | 0.942 |
| Calcium score | | | 3.136 | 0.371 |
| 0 | 49.0 (62.0%) | 321.0 (53.1%) | | |
| 1-99 | 17.0 (21.5%) | 142.0 (23.5%) | | |
| 100-399 | 7.0 (8.9%) | 92.0 (15.2%) | | |
| 400-1000 | 6.0 (7.6%) | 50.0 (8.3%) | | |
| Height (cm) | 160.0 (138.0, 185.0) | 160.0 (137.0, 180.0) | −0.403 | 0.687 |
| Weight (kg) | 65.3 (29.0, 95.0) | 61.9 (36.0, 91.0) | −1.920 | 0.055 |
| BMI (kg/m²) | | | 18.344 | < 0.001 |
| Underweight | 5.0 (6.3%) | 21.0 (3.5%) | | |
| Normal | 26.0 (32.9%) | 274.0 (45.3%) | | |
| Overweight | 25.0 (31.6%) | 242.0 (40.0%) | | |
| Obese | 23.0 (29.1%) | 68.0 (11.2%) | | |
| Heart rate (beats/min) | | | 0.261 | 0.877 |
| < 60 | 8.0 (10.1%) | 51.0 (8.4%) | | |
| 60-100 | 64.0 (81.0%) | 501.0 (82.8%) | | |
| > 100 | 7.0 (8.9%) | 53.0 (8.8%) | | |
| Heart rhythm | | | 0.707 | 0.401 |
| Regular sinus rhythm | 71.0 (89.9%) | 560 (92.6%) | | |
| Arrhythmia | 8.0 (10.1%) | 45.0 (7.4%) | | |
| Iodine concentration (mgI/mL) | 370.0 (300.0, 400.0) | 370.0 (300.0, 400.0) | −3.180 | 0.001 |
| Contrast volume (mL) | 45.0 (21.0, 65.0) | 40.0 (4.0, 65.0) | −2.801 | 0.005 |
| Injection rate (mL/s) | 4.0 (3.0, 5.0) | 4.0 (3.0, 5.0) | −2.568 | 0.010 |
| Tube voltage (kV) | 80.0 (70.0, 120.0) | 70.0 (70.0, 120.0) | −5.981 | < 0.001 |
| Scan mode | | | 1.141 | 0.707 |
| Non-Flash | 66.0 (83.5%) | 495.0 (81.8%) | | |
| Flash | 13.0 (16.5%) | 110.0 (18.2%) | | |

2.2 Establishment of a model for predicting good objective image quality
BMI, iodine concentration, contrast agent volume, injection rate, and tube voltage from the univariate analysis were entered as input variables into the binary logistic regression. BMI, iodine concentration, and tube voltage were independent risk factors for predicting good objective image quality; detailed results are given in Table 2. The binary logistic regression model was established as follows:
Table 2. Independent risk factors affecting the objective image quality

| Variable | β | S.E. | Wald | P | OR | 95% CI |
| --- | --- | --- | --- | --- | --- | --- |
| BMI (reference) | | | 8.066 | 0.045 | | |
| BMI(1) | −1.500 | 0.656 | 5.235 | 0.022 | 0.223 | 0.062-0.806 |
| BMI(2) | −0.506 | 0.454 | 1.243 | 0.265 | 0.603 | 0.248-1.467 |
| BMI(3) | 0.046 | 0.400 | 0.013 | 0.909 | 1.047 | 0.477-2.295 |
| Iodine concentration | −0.009 | 0.005 | 3.969 | 0.046 | 0.991 | 0.982-1.000 |
| Contrast volume | −0.019 | 0.025 | 0.544 | 0.461 | 0.982 | 0.934-1.031 |
| Injection rate | −0.210 | 0.365 | 0.332 | 0.565 | 0.810 | 0.396-1.658 |
| Tube voltage | −0.096 | 0.016 | 35.002 | 0 | 0.908 | 0.880-0.938 |
| Constant | 14.743 | 2.601 | 32.128 | 0 | 2527098.094 | |

$$
{\mathrm{P}} = \frac{1}{1+{\mathrm{exp}}(-z)}, \qquad
\begin{split}
z ={}& 14.743 - 1.500\times {\mathrm{BMI}}(1) - 0.506\times {\mathrm{BMI}}(2) + 0.046\times {\mathrm{BMI}}(3) + 0\times {\mathrm{BMI}}(4) \\
& - 0.009\times \text{iodine concentration} - 0.019\times \text{contrast volume} \\
& - 0.210\times \text{injection rate} - 0.096\times \text{tube voltage}
\end{split}
\tag{1}
$$

The area under the ROC curve (AUC) was 0.757 (95% CI: 0.723-0.789), with an optimal cutoff value of 0.9168, a sensitivity of 61.32%, and a specificity of 83.54% (Figure 1). A nomogram model was also constructed in R (Figure 2). Figure 3 shows the prediction model applied to the coronary CTA examination of one patient. Figure 3(a)-(d): a male patient (height 1.7 m, weight 75 kg, BMI = 33.33, obese group, BMI(4)) underwent third-generation dual-source coronary CT angiography with good objective image quality (coronary CT values of approximately 500-600 HU); iodine concentration 400 mgI/mL, injection rate 4 mL/s, total contrast volume 40 mL, tube voltage 70 kV. Substituting these values into the binary logistic regression model of equation (1) gives P = 0.9439 > 0.9168, i.e. good objective image quality is predicted; in the nomogram model the total score is approximately 142 points, corresponding to a predicted probability above 95%, indicating a high likelihood of good objective image quality.
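To make the use of equation (1) concrete, the minimal Python sketch below (ours, not the authors' code) evaluates the fitted model for the patient in Figure 3 and compares the predicted probability with the ROC cutoff of 0.9168. The BMI dummy coding follows the printed equation, with BMI(1)-BMI(3) standing for the underweight, normal, and overweight groups and the obese group carrying the zero coefficient.

```python
# Evaluate equation (1) for one patient and compare with the 0.9168 cutoff.
import math

def predict_good_quality(bmi1, bmi2, bmi3, iodine_mg_ml, contrast_ml, rate_ml_s, kv):
    # Linear predictor with the coefficients reported in Table 2 / equation (1)
    z = (14.743 - 1.500 * bmi1 - 0.506 * bmi2 + 0.046 * bmi3
         - 0.009 * iodine_mg_ml - 0.019 * contrast_ml
         - 0.210 * rate_ml_s - 0.096 * kv)
    return 1.0 / (1.0 + math.exp(-z))   # logistic function

# Obese patient from Figure 3: 400 mgI/mL, 40 mL contrast, 4 mL/s, 70 kV
p = predict_good_quality(0, 0, 0, 400, 40, 4, 70)
print(round(p, 4), p > 0.9168)          # 0.9439 True -> good objective image quality expected
```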
2.3 Relationship between tube voltage, iodine concentration, and coronary CT values across BMI groups
Tube voltages of 70-120 kV were used in this study. The higher the tube voltage, the lower the coronary CT value, and for the same coronary CT value a higher BMI required a higher tube voltage (Figure 4(a)). Iodine concentrations of 300-400 mgI/mL were used; the higher the iodine concentration, the higher the coronary CT value, and for the same coronary CT value a higher BMI required a higher iodine concentration (Figure 4(b)).
Figure 4(a) shows that, within each BMI group, higher tube voltage yields lower CT values; for the same coronary CT value, the required tube voltage increases from underweight to normal to overweight to obese patients, the lowest tube voltage in this study being 70 kV. Figure 4(b) shows that, within each BMI group, higher iodine concentration yields higher coronary CT values; for the same coronary CT value (below 700-799 HU), the required iodine concentration increases from underweight to normal to overweight to obese patients.
3. Discussion
Based on the report by Xu et al. [6] and on the fact that excessively high CT values impair the depiction of non-calcified plaques [7], this study defined good objective image quality as CT values of 400-900 HU in the left main coronary artery and the proximal segments of its three major branches, and poor objective image quality as any one of these vessels having a CT value below 400 HU or above 900 HU. The upper CT limit for poor objective image quality has rarely been reported in the literature; in this study, patients with CT values above 900 HU accounted for approximately 4.25% of all included subjects, a low-probability event, so CT values above 900 HU were tentatively classified as poor objective image quality.
The results show that BMI, iodine concentration, and tube voltage are independent risk factors affecting objective image quality. As BMI increases, X-ray penetration decreases and coronary CT values fall; a uniform injection protocol may therefore compromise objective image quality in patients with high BMI, while the contrast dose may be excessive for patients with low BMI [8]. Higher BMI also calls for a higher tube voltage to reduce image noise, and some studies have used deep learning image reconstruction to improve image quality [9]. At the same contrast volume and injection rate, a higher iodine concentration yields a higher intravascular iodine content and thus a higher CT value. High contrast doses can cause irreversible kidney damage and increase the incidence of adverse reactions to iodinated contrast agents, and faster injection rates increase the risk of contrast extravasation [10]. Therefore, while ensuring a sufficient iodine content per unit blood volume, the contrast volume, injection rate, and iodine concentration need to be balanced for each patient. Numerous previous studies [11-14] have reported scanning protocols with low tube voltage and low total contrast volume to reduce the radiation dose and iodine intake of coronary CT angiography.
With the widespread clinical use of CCTA and the increasing use of computer-aided diagnosis software to assess the degree of coronary stenosis [15], individualized scanning protocols are needed to ensure image quality and examination success. This study established a binary logistic regression prediction model and converted the complex mathematical formula into a nomogram, which can help radiographers design more appropriate scanning protocols for individual patients, improving the examination success rate while achieving low radiation dose and low iodine intake.
Limitations of this study: it was not a multicenter study and used only a third-generation SIEMENS Force CT scanner, which limits its generalizability; moreover, the effects of factors such as cardiac volume and cardiac function on CCTA image quality were not considered. Future work therefore requires additional clinical data to refine standardized, individualized, and precise scanning protocols, as well as samples from more centers to validate the results.
In conclusion, the computer prediction model provides a convenient and concise way to formulate individualized scanning protocols, reducing patient radiation dose and iodine intake while maintaining the image quality of third-generation dual-source coronary CT angiography.
-
Table 1 Performance comparison of typical CT imaging algorithms on existing data sets

| Category | Method | Main features | Advantages | Disadvantages |
| --- | --- | --- | --- | --- |
| Traditional CT reconstruction | FBP | The most basic analytical reconstruction algorithm | Fast imaging, good robustness | Poor reconstruction quality for sparse-angle CT |
| | TV-POCS | Iterative reconstruction constrained by a TV regularization term | Works on the raw projection data, so information is less likely to be lost; denoising better than FBP | Denoised results prone to blocky artifacts; some important fine structures are smoothed out |
| Traditional CT post-processing | BM3D | Block-matching-based post-processing | Detail preservation better than TV-POCS | Blurring and distortion in the denoised results |
| | K-SVD | Dictionary-learning-based post-processing | | Long computation time; some artifacts remain in the denoised results |
| Deep learning-based CT reconstruction | iCT-Net | Uses CNNs to learn FBP: projection data extension, filtering, back-projection, summation | Good reconstruction for sparse-angle, short-scan, and interior-scan CT | Does not address cone-beam CT reconstruction; large number of network parameters |
| | LEARN | Unrolled iterative algorithm that unrolls the "fields of experts" model and learns it with CNNs | Reconstructions retain more edges and details; more efficient than traditional iterative algorithms | Poor robustness; restrictions on the regularization function |
| | DRONE | Dual-domain residual-based optimization network | High reconstruction accuracy | Requires more data for training |
| Deep learning-based CT post-processing | RED-CNN | CNN with 5 encoding and 5 decoding layers; the initial input and the features of the 2nd and 4th encoding layers are merged into the corresponding decoding layers via residual connections | Denoising better than traditional post-processing; few residual artifacts | Denoised images prone to over-smoothing, losing fine details such as vessels |
| | WGAN-VGG | GAN with an 8-layer convolutional generator and a discriminator of 6 convolutional plus 2 fully connected layers; WGAN + VGG loss | Stable training; effectively alleviates over-smoothing | May destroy original structures and introduce new noise while suppressing artifacts |
| | SACNN | CNN with self-attention and autoencoder modules | Good balance between artifact suppression and structure preservation | Some noise remains; fine structures are distorted |
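The trade-off summarized in the table above, namely that FBP is fast and robust but degrades on sparse-angle data, can be reproduced with standard tools. The sketch below uses scikit-image's `radon`/`iradon` on the Shepp-Logan phantom; it is purely illustrative and not drawn from any of the cited works.

```python
# Illustrative FBP experiment: dense vs. sparse angular sampling with scikit-image.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()

for n_views in (720, 60):                                      # dense vs. sparse angular sampling
    theta = np.linspace(0.0, 180.0, n_views, endpoint=False)
    sinogram = radon(phantom, theta=theta)                     # forward projection
    recon = iradon(sinogram, theta=theta, filter_name="ramp")  # FBP: ramp filter + back-projection
    rmse = np.sqrt(np.mean((recon - phantom) ** 2))
    print(f"{n_views:4d} views  RMSE = {rmse:.4f}")            # sparse views -> streak artifacts, higher RMSE
```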
[1] BRENNER D J, HALL E J. Computed tomography: An increasing source of radiation exposure[J]. New England Journal of Medicine, 2007, 357(22): 2277−2284.
[2] SMITH-BINDMAN R, LIPSON J, MARCUS R, et al. Radiation dose associated with common computed tomography examinations and the associated lifetime attributable risk of cancer[J]. Archives of Internal Medicine, 2009, 169(22): 2078−2086. doi: 10.1001/archinternmed.2009.427
[3] HART D, WALL B F. UK population dose from medical X-ray examinations[J]. European Journal of Radiology, 2004, 50(3): 285−291. doi: 10.1016/S0720-048X(03)00178-5
[4] HSIEH J. Computed tomography: Principles, design, artifacts, and recent advances[M]. SPIE Press, 2003.
[5] SHRIMPTON P C, HILLIER M C, LEWIS M A, et al. Doses from computed tomography (CT) examinations in the UK-2003 review[M]. National Radiological Protection Board, 2005.
[6] SIGAL-CINQUALBRE A B, HENNEQUIN R, ABADA H T, et al. Low-kilovoltage multi-detector row chest CT in adults: Feasibility and effect on image quality and iodine dose[J]. Radiology, 2004, 231(1): 169−174. doi: 10.1148/radiol.2311030191
[7] YEDDER H B, CARDOEN B, HAMARNEH G. Deep learning for biomedical image reconstruction: A survey[J]. Artificial Intelligence Review, 2021, 54(1): 215−251.
[8] LIU C, QIN W C. Multi-slice spiral CT (I)[J]. Journal of Medical Imaging, 2000, 10(3): 194−195. (in Chinese).
[9] SHANGGUAN H. Study on statistical iterative reconstruction methods for low-dose X-ray CT[D]. Taiyuan: North University of China, 2016.
[10] MA J, LIANG Z, FAN Y, et al. Variance analysis of X-ray CT sinograms in the presence of electronic noise background[J]. Medical Physics, 2012, 39(7): 4051−4065.
[11] ZHANG H, OUYANG L, MA J, et al. Noise correlation in CBCT projection data and its appli-cation for noise reduction in low-dose CBCT[J]. Medical Physics, 2014, 41(3): 031906. doi: 10.1118/1.4865782
[12] XIE S P, LUO L M, YANG L F, et al. Scatter correction method for cone beam CT using beam attenuation grid[C]//Key Engineering Materials. Trans Tech Publications Ltd, 2011, 480: 341-346.
[13] HSIEH J. Adaptive streak artifact reduction in computed tomography resulting from excessive X-ray photon noise[J]. Medical Physics, 1998, 25(11): 2139−2147.
[14] KACHELRIESS M, WATZKE O, KALENDER W A. Generalized multi-dimensional adaptive filtering for conventional and spiral single-slice, multi-slice, and cone-beam CT[J]. Medical Physics, 2001, 28(4): 475−490.
[15] SAHINER B, YAGLE A E. Image reconstruction from projections under wavelet constraints[J]. IEEE Transactions on Signal Processing, 1993, 41(12): 3579−3584. doi: 10.1109/78.258101
[16] YAZDI M, BEAULIEU L. Artifacts in spiral X-ray CT scanners: Problems and solutions[J]. International Journal of Biological and Medical Sciences, 2008, 4(3): 135−139.
[17] DEMIRKAYA O. Reduction of noise and image artifacts in computed tomography by nonlinear filtration of projection images[C]//Medical Imaging 2001. International Society for Optics and Photonics, 2001: 917-923.
[18] WANG J, LI T, LU H, et al. Penalized weighted least-squares approach for low-dose X-ray computed tomography[C]//Medical Imaging 2006: Physics of Medical Imaging. International Society for Optics and Photonics, 2006, 6142: 614247.
[19] ZHANG Q, GUI Z, CHEN Y, et al. Bayesian sinogram smoothing with an anisotropic diffusion weighted prior for low-dose X-ray computed tomography[J]. Optik-International Journal for Light and Electron Optics, 2013, 124(17): 2811−2816. doi: 10.1016/j.ijleo.2012.08.045
[20] RUDIN L I, OSHER S, FATEMI E. Nonlinear total variation based noise removal algorithms[J]. Physica D: Nonlinear Phenomena, 1992, 60(1): 259−268.
[21] LIANG N, LI Z, WANG L, et al. A limited-angle computed tomography reconstruction algorithm based on projection completion via generative adversarial networks[J]. Chinese Journal of Stereology and Image Analysis, 2019, 24(1): 1−8. (in Chinese).
[22] GORDON R, BENDER R, HERMAN G T. Algebraic reconstruction techniques (ART) for three- dimensional electron microscopy and X-ray photography[J]. Journal of Theoretical Biology, 1970, 29(3): 471−481. doi: 10.1016/0022-5193(70)90109-8
[23] ZHANG H, WANG J, MA J, et al. Statistical models and regularization strategies in statistical image reconstruction of low-dose X-ray CT: A survey[J]. arXiv preprint arXiv: 1412.1732, 2014.
[24] LEVITAN E, HERMAN G T. A maximum a posteriori probability expectation maximization algorithm for image reconstruction in emission tomography[J]. IEEE Transactions on Medical Imaging, 1987, 6(3): 185−192. doi: 10.1109/TMI.1987.4307826
[25] HSIAO T, RANGARAJAN A, GINDI G. A new convex edge-preserving median prior with applications to tomography[J]. IEEE Transactions on Medical Imaging, 2003, 22(5): 580−585. doi: 10.1109/TMI.2003.812249
[26] LU Y, ZHAO J, WANG G. Few-view image reconstruction with dual dictionaries[J]. Physics in Medicine & Biology, 2011, 57(1): 173.
[27] BAI T, MOU X, XU Q, et al. Low-dose CT reconstruction based on multiscale dictionary[C]//International Society for Optics and Photonics. Medical Imaging 2013: Physics of Medical Imaging, 2013, 8668: 86683L.
[28] NIU S, GAO Y, BIAN Z, et al. Sparse-view X-ray CT reconstruction via total generalized variation regularization[J]. Physics in Medicine and Biology, 2014, 59(12): 2997. doi: 10.1088/0031-9155/59/12/2997
[29] LIU Y, LIANG Z, MA J, et al. Total variation-stokes strategy for sparse-view X-ray CT image reconstruction[J]. IEEE Transactions on Medical Imaging, 2013, 33(3): 749−763.
[30] LIANG K, YANG H, XING Y. Comparison of projection domain, image domain, and comprehensive deep learning for sparse-view X-ray CT image reconstruction[J]. arXiv preprint arXiv: 1804.04289, 2018.
[31] PELT D M, BATENBURG K J. Improving filtered backprojection reconstruction by data-dependent filtering[J]. IEEE Transactions on Image Processing, 2014, 23(11): 4750−4762. doi: 10.1109/TIP.2014.2341971
[32] WANG B, LIU H. FBP-Net for direct reconstruction of dynamic PET images[J]. Physics in Medicine & Biology, 2020, 65(23): 235008.
[33] ZHANG Q, LIANG D. Visualization of fully connected layer weights in deep learning CT reconstruction[J]. arXiv preprint arXiv: 2002.06788, 2020.
[34] ZHU B, LIU J Z, CAULEY S F, et al. Image reconstruction by domain-transform manifold learning[J]. Nature, 2018, 555(7697): 487−492. doi: 10.1038/nature25988
[35] FU L, DE MAN B. A hierarchical approach to deep learning and its application to tomographic reconstruction[C]//International Society for Optics and Photonics. 15th International Meeting on Fully Three-Dimensional Image Reconst- ruction in Radiology and Nuclear Medicine, 2019, 11072: 1107202.
[36] YE D H, BUZZARD G T, RUBY M, et al. Deep back projection for sparse-view CT reconstruction[C]//2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP). IEEE, 2018: 1-5.
[37] TAO X, ZHANG H, WANG Y, et al. VVBP-tensor in the FBP algorithm: Its properties and application in low-dose CT reconstruction[J]. IEEE transactions on medical imaging, 2019, 39(3): 764−776.
[38] TAO X, WANG Y, LIN L, et al. Learning to reconstruct CT images from the VVBP-tensor[J]. IEEE Transactions on Medical Imaging, 2021.
[39] LI Y, LI K, ZHANG C, et al. Learning to reconstruct computed tomography images directly from sinogram data under a variety of data acquisition conditions[J]. IEEE Transactions on Medical Imaging, 2019, 38(10): 2469−2481. doi: 10.1109/TMI.2019.2910760
[40] HE J, WANG Y, MA J. Radon inversion via deep learning[J]. IEEE Transactions on Medical Imaging, 2020, 39(6): 2076−2087. doi: 10.1109/TMI.2020.2964266
[41] WU W, HU D, NIU C, et al. DRONE: Dual-domain residual-based optimization network for sparse-view CT reconstruction[J]. IEEE Transactions on Medical Imaging, 2021.
[42] GE Y, SU T, ZHU J, et al. ADAPTIVE-NET: Deep computed tomography reconstruction network with analytical domain transformation knowledge[J]. Quantitative Imaging in Medicine and Surgery, 2020, 10(2): 415. doi: 10.21037/qims.2019.12.12
[43] ZHANG Q, HU Z, JIANG C, et al. Artifact removal using a hybrid-domain convolutional neural network for limited-angle computed tomography imaging[J]. Physics in Medicine & Biology, 2020, 65(15): 155010.
[44] LIN W A, LIAO H, PENG C, et al. Dudonet: Dual domain network for CT metal artifact reduction[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 10512-10521.
[45] WANG T, XIA W, HUANG Y, et al. DAN-Net: Dual-domain adaptive-scaling non-local network for CT metal artifact reduction[J]. arXiv Preprint arXiv: 2102.08003, 2021.
[46] WANG T, XIA W, LU Z, et al. IDOL-Net: An interactive dual-domain parallel network for CT metal artifact reduction[J]. arXiv Preprint arXiv: 2104.01405, 2021.
[47] RAN M, XIA W, HUANG Y, et al. Md-recon-net: A parallel dual-domain convolutional neural network for compressed sensing MRI[J]. IEEE Transactions on Radiation and Plasma Medical Sciences, 2020, 5(1): 120−135.
[48] CHEN H, ZHANG Y, CHEN Y, et al. LEARN: Learned experts' assessment-based reconstruction network for sparse-data CT[J]. IEEE Transactions on Medical Imaging, 2018, 37(6): 1333−1347. doi: 10.1109/TMI.2018.2805692
[49] WU D, KIM K, El Fakhri G, et al. Iterative low-dose CT reconstruction with priors trained by artificial neural network[J]. IEEE Transactions on Medical Imaging, 2017, 36(12): 2479−2486. doi: 10.1109/TMI.2017.2753138
[50] KANG E, CHANG W, YOO J, et al. Deep convolutional framelet denosing for low-dose CT via wavelet residual network[J]. IEEE Transactions on Medical Imaging, 2018, 37(6): 1358−1369. doi: 10.1109/TMI.2018.2823756
[51] GAO Y, LIANG Z, MOORE W, et al. A feasibility study of extracting tissue textures from a previous full-dose CT database as prior knowledge for Bayesian reconstruction of current low-dose CT images[J]. IEEE Transactions on Medical Imaging, 2019, 38(8): 1981−1992. doi: 10.1109/TMI.2018.2890788
[52] VENKATAKRISHNAN S V, BOUMAN C A, WOHLBERG B. Plug-and-play priors for model based reconstruction[C]//2013 IEEE Global Conference on Signal and Information Processing. IEEE, 2013: 945-948.
[53] SREEHARI S, VENKATAKRISHNAN S V, WOHLBERG B, et al. Plug-and-play priors for bright field electron tomography and sparse interpolation[J]. IEEE Transactions on Computational Imaging, 2016, 2(4): 408−423.
[54] CASCARANO P, PICCOLOMINI E L, MOROTTI E, et al. Plug-and-play external and internal priors for image restoration[J]. arXiv e-prints, 2021, arXiv: 2102.07510.
[55] ADLER J, OKTEM O. Learned primal-dual reconstruction[J]. IEEE Transactions on Medical Imaging, 2018, 37(6): 1322−1332.
[56] XIA W, LU Z, HUANG Y, et al. MAGIC: Manifold and graph integrative convolutional network for low-dose CT reconstruction[J]. IEEE Transactions on Medical Imaging, 2021.
[57] CHEN G, HONG X, DING Q, et al. AirNet: Fused analytical and iterative reconstruction with deep neural network regularization for sparse-data CT[J]. Medical Physics, 2020, 47(7): 2916−2930. doi: 10.1002/mp.14170
[58] GUPTA H, JIN K H, NGUYEN H Q, et al. CNN-based projected gradient descent for consistent CT image reconstruction[J]. IEEE Transactions on Medical Imaging, 2018, 37(6): 1440−1453. doi: 10.1109/TMI.2018.2832656
[59] ZHANG H, LIU B, YU H, et al. MetaInv-net: Meta inversion network for sparse view CT image reconstruction[J]. IEEE Transactions on Medical Imaging, 2020, 40(2): 621−634.
[60] BUADES A, COLL B, MOREL J M. A non-local algorithm for image denoising[C]//2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). IEEE, 2005, 2: 60-65.
[61] CHEN Y, YANG Z, HU Y, et al. Thoracic low-dose CT image processing using an artifact suppressed large-scale nonlocal means[J]. Physics in Medicine & Biology, 2012, 57(9): 2667.
[62] CHEN Y, CHEN W, YIN X, et al. Improving low-dose abdominal CT images by weighted intensity averaging over large-scale neighborhoods[J]. European Journal of Radiology, 2011, 80(2): e42−e49. doi: 10.1016/j.ejrad.2010.07.003
[63] ZHONG J, NING R, CONOVER D. Image denoising based on multiscale singularity detection for cone beam CT breast imaging[J]. IEEE Transactions on Medical Imaging, 2004, 23(6): 696−703. doi: 10.1109/TMI.2004.826944
[64] FERUGLIO P F, VINEGONI C, GROS J, et al. Block matching 3D random noise filtering for absorption optical projection tomography[J]. Physics in Medicine & Biology, 2010, 55(18): 5401.
[65] CHEN Y, SHI L, FENG Q, et al. Artifact suppressed dictionary learning for low-dose CT image processing[J]. IEEE Transactions on Medical Imaging, 2014, 33(12): 2271−2292. doi: 10.1109/TMI.2014.2336860
[66] ZAMYATIN A, KATSEVICH G, KRYLOV R, et al. Adaptive multi-scale total variation minimization filter for low dose CT imaging[C]//SPIE Medical Imaging. International Society for Optics and Photonics, 2014: 903426-903426-7.
[67] MENG B, JIANG H, LIU Z, et al. Curvelet-based bilinear interpolation method for low-dose CT[C]//International Society for Optics and Photonics. Fifth International Conference on Digital Image Processing, 2013: 88783X-88783X-5.
[68] CHEN H, ZHANG Y, ZHANG W H. Low dose CT via convolutional neural network[J]. Biomedical Optics Express, 2017, 8(2): 679−694. DOI: 10.1364/BOE.8.000679.
[69] SHAN H M, ZHANG Y, YANG Q S, et al. 3-D convolutional encoder-decoder network for low-dose CT via transfer learning from a 2-D trained network[J]. IEEE Transactions on Medical Imaging, 2018, 37(6): 1522−1534. DOI: 10.1109/TMI.2018.2832217.
[70] YANG Q S, YAN P K, ZHANG Y B, et al. Low-dose CT image denoising using a generative adversarial network with Wasserstein distance and perceptual loss[J]. IEEE Transactions on Medical Imaging, 2018, 37(6): 1348−1357. DOI: 10.1109/TMI.2018.2827462.
[71] ZHU J Y, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proceedings of the IEEE international conference on computer vision. 2017: 2223-2232.
[72] CHEN Y. CT-LSTM: Detection & estimation duplexed system for robust object tracking[C]//Proceedings of the 2nd International Conference on Computer Science and Application Engineering, 2018: 1-7.
[73] BURGER H C, SCHULER C J, HARMELING S. Image denoising: Can plain neural networks compete with BM3D?[C]//2012 IEEE conference on computer vision and pattern recognition. IEEE, 2012: 2392-2399.
[74] CHEN H, ZHANG Y, KALRA M, et al. Low-dose CT with a residual encoder-decoder convolutional neural network[J]. IEEE Transactions on Medical Imaging, 2017, 36(12): 2524−2535. DOI: 10.1109/TMI.2017.2715284.
[75] HEINRICH M P, STILLE M, BUZUG T M. Residual U-net convolutional neural network architecture for low-dose CT denoising[J]. Current Directions in Biomedical Engineering, 2018, 4(1): 297−300. doi: 10.1515/cdbme-2018-0072
[76] WU D F, KIM K, FAKHRI G E, et al. A cascaded convolutional neural network for X-ray low-dose CT image denoising[OL]. https://arxiv.org/abs/1705.04267, 2017.5.
[77] SHAN H, PADOLE A, HOMAYOUNIEH F, et al. Competitive performance of a modularized deep neural network compared to commercial algorithms for low-dose CT image reconstruction[J]. Nature Machine Intelligence, 2019, 1(6): 269−276. doi: 10.1038/s42256-019-0057-9
[78] WOLTERINK J M, LEINER T, VIERGEVER M A, et al. Generative adversarial networks for noise reduction in low-dose CT[J]. IEEE Transactions on Medical Imaging, 2017, 36(12): 2536−2545. DOI: 10.1109/TMI.2017.2708987.
[79] YI X, BABYN P. Sharpness-aware low-dose CT denoising using conditional generative adversarial network[J]. Journal of Digital Imaging, 2018, 31(5): 655−669. doi: 10.1007/s10278-018-0056-0
[80] ISOLA P, ZHU J Y, ZHOU T, et al. Image-to-image translation with conditional adversarial networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 1125-1134.
[81] YANG L, SHANGGUAN H, ZHANG X, et al. High-frequency sensitive generative adversarial network for low-dose CT image denoising[J]. IEEE Access, 2019, 8: 930−943.
[82] YOU C, YANG L, ZHANG Y, et al. Low-dose CT via deep CNN with skip connection and network-in- network[C]//Developments in X-ray Tomography XII. International Society for Optics and Photonics, 2019, 11113: 111131W.
[83] ARJOVSKY M, CHINTALA S, BOTTOU L. Wasserstein GAN[J]. arXiv Preprint arXiv: 1701.07875, 2017.
[84] LI X, YE C, YAN Y, et al. Low-dose CT image denoising based on improved WGAN-gp[J]. Journal of New Media, 2019, 1(2): 75. doi: 10.32604/jnm.2019.06259
[85] MAO X, LI Q, XIE H, et al. Least squares generative adversarial networks[C]//Proceedings of the IEEE International Conference on Computer Vision, 2017: 2794-2802.
[86] MIYATO T, KATAOKA T, KOYAMA M, et al. Spectral normalization for generative adversarial networks[J]. arXiv Preprint arXiv: 1802.05957, 2018.
[87] PARK H S, BAEK J, YOU S K, et al. Unpaired image denoising using a generative adversarial network in X-ray CT[J]. IEEE Access, 2019, 7: 110414−110425. doi: 10.1109/ACCESS.2019.2934178
[88] DU W, CHEN H, LIAO P, et al. Visual attention network for low-dose CT[J]. IEEE Signal Processing Letters, 2019, 26(8): 1152−1156. doi: 10.1109/LSP.2019.2922851
[89] RAJEEV R, SAMATH J A, KARTHIKEYAN N K. An intelligent recurrent neural network with long short-term memory (LSTM) BASED batch normalization for medical image denoising[J]. Journal of Medical Systems, 2019, 43(8): 234. doi: 10.1007/s10916-019-1371-9
[90] YANG Q, YAN P, KALRA M K, et al. CT image denoising with perceptive deep neural networks[J]. arXiv Preprint arXiv: 1702.07019, 2017.
[91] YOU C, YANG Q, SHAN H, et al. Structurally-sensitive multi-scale deep neural network for low-dose CT denoising[J]. IEEE Access, 2018, 6: 41839−41855. doi: 10.1109/ACCESS.2018.2858196
[92] GU J, YE J C. AdaIN-based tunable cycleGAN for efficient unsupervised low-dose CT denoising[J]. IEEE Transactions on Computational Imaging, 2021, 7: 73−85. doi: 10.1109/TCI.2021.3050266
[93] KWON T, YE J C. Cycle-free cycleGAN using invertible generator for unsupervised low-dose CT denoising[J]. arXiv Preprint arXiv: 2104.08538, 2021.
[94] LIAO H, LIN W A, ZHOU S K, et al. ADN: Artifact disentanglement network for unsupervised metal artifact reduction[J]. IEEE Transactions on Medical Imaging, 2019, 39(3): 634−643.
[95] KO Y, MOON S, BAEK J, et al. Rigid and non-rigid motion artifact reduction in X-ray CT using attention module[J]. Medical Image Analysis, 2021, 67: 101883. doi: 10.1016/j.media.2020.101883
[96] LI M, HSU W, XIE X, et al. SACNN: Self-attention convolutional neural network for low-dose CT denoising with self-supervised perceptual loss network[J]. IEEE Transactions on Medical Imaging, 2020, 39(7): 2289−2301. doi: 10.1109/TMI.2020.2968472
[97] ZHANG H, PATEL V M. Density-aware single image de-raining using a multi-stream dense network[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 695-704.
[98] HU X, FU C W, ZHU L, et al. Depth-attentional features for single-image rain removal[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 8022-8031.
[99] GUO S, YAN Z, ZHANG K, et al. Toward convolutional blind denoising of real photographs[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 1712-1722.
[100] WANG Y, GONG D, YANG J, et al. An effective two-branch model-based deep network for single image deraining[J]. arXiv Preprint arXiv: 1905.05404, 2019.
[101] WANG Y, ZHANG H, LIU Y, et al. Gradient information guided deraining with a novel network and adversarial training[J]. arXiv Preprint arXiv: 1910.03839, 2019.
[102] MCCOLLOUGH C L, CHEN B, HOLMES D, et al. Low dose CT image and projection data (LDCT-and-Projection-data)[DB/OL]. The Cancer Imaging Archive, 2020. (2020-00-00) [2021-05-20]. https://doi.org/10.7937/9npb-2637.
[103] The American Association of Physicists in Medicine (AAPM). Low dose CT grand challenge[DB/OL]. (2017-00-00) [2021-05-20]. http://www.aapm.org/GrandChallenge/LowDoseCT/.
[104] National biomedical imaging archive[DB/OL]. (2021-00-00) [2021-05-20]. NCIP/national-biomedical-image-archive.
[105] CLARK K, VENDT B, SMITH K, et al. The Cancer Imaging Archive (TCIA): Maintaining and operating a public information repository[J]. Journal of Digital Imaging, 2013, 26(6): 1045−1057.
[106] LINGLE W, ERICKSON B, ZULEY M, et al. Radiology data from the cancer genome atlas breast invasive carcinoma [TCGA-BRCA] collection[DB/OL]. The Cancer Imaging Archive, (2016-00-00) [2021-05-20]. http://wiki.cancerimagingarchive.net/.
[107] YI X. Piglet Dataset[DB/OL]. (2019-00-00)[2021-05-20] http://homepage.usask.ca/~xiy525/.