Advances in Research on Low-dose CT Imaging Algorithm Based on Deep Learning
-
Abstract: Computed tomography (CT) is widely used in clinical diagnosis because of its fast imaging speed and high resolution. However, higher radiation doses can damage human tissues and organs, while lower doses lead to a serious deterioration of image quality. To resolve this contradiction, researchers have focused on low-dose CT (LDCT) imaging technology, studying how to minimize the harm that X-ray radiation causes to the human body while keeping image quality sufficient for clinical diagnosis. In recent years, deep learning has developed rapidly within artificial intelligence and has been widely applied to image processing, pattern recognition, and signal processing. Driven by big data, LDCT imaging algorithms based on deep learning have also made great progress. This paper reviews the development of low-dose CT imaging algorithms in recent years from three aspects: the CT imaging process, the noise modeling of low-dose CT, and the design of imaging algorithms. In particular, deep learning-based imaging algorithms are systematically described and analyzed, and future developments in the field of LDCT image artifact suppression are discussed.
-
Keywords:
- deep learning
- low-dose CT
- artifact suppression
- noise modeling
-
Coronary artery disease remains a leading cause of death and imposes a heavy economic and social burden worldwide [1]. With the development of multi-detector spiral CT, coronary artery CT angiography (CCTA) has become the preferred non-invasive examination for the clinical evaluation and diagnosis of coronary artery disease [2-3]. However, the relatively high ionizing radiation associated with CCTA has drawn increasing attention [4].
Several approaches are currently available to reduce the radiation dose of CCTA, such as prospective ECG-gated scanning, automatic tube-voltage and tube-current modulation, and iterative reconstruction algorithms [2, 5-8]. Lowering the patient's heart rate with β-blockers can also indirectly reduce the radiation dose [4]. Heart-rate control is critical to image quality, and using a narrower acquisition time window in patients with low heart rates can further reduce the radiation dose [9].
This study aimed to determine the optimal acquisition phase of CCTA at different heart rates, to optimize the ECG scan time window for patients with different heart rates, and to compare image quality and radiation dose between full cardiac-cycle acquisition (time window 0%~100%) and acquisition with a narrowed time window. In a subset of patients, digital subtraction angiography (DSA) was used as the reference standard to compare the diagnostic accuracy of CCTA for the degree of stenosis of the coronary arteries and their branches between the two groups.
1. Materials and Methods
1.1 General information
This study was approved by the ethics committee of our hospital, and all patients signed informed consent forms for the examination. A total of 1000 patients who underwent CCTA in our hospital from January to November 2023 were prospectively enrolled, including 513 men and 487 women, with a median age of 58 years (49, 66) and heart rates of 37~117 bpm; none of the patients received sublingual nitroglycerin. According to the heart rate during CCTA scanning, the patients were divided into subgroups A1 and B1 (heart rate <51 bpm), A2 and B2 (51~55 bpm), A3 and B3 (56~60 bpm), A4 and B4 (61~65 bpm), A5 and B5 (66~70 bpm), A6 and B6 (71~75 bpm), A7 and B7 (76~80 bpm), A8 and B8 (81~85 bpm), and A9 and B9 (>85 bpm).
Exclusion criteria: ① allergy to iodinated contrast agents; ② inability to cooperate with breath-holding; ③ pregnancy; ④ implanted cardiac pacemaker; ⑤ cardiac or renal insufficiency.
1.2 Scanning protocol
All patients were scanned on a uCT 968 scanner (United Imaging Healthcare). The scanning parameters were as follows: tube voltage, automatic tube-voltage modulation; tube current, ECG-based mA modulation at 150~180 mA, with the exact tube voltage and current adjusted to patient body habitus; gantry rotation time, 0.25 s/r; SFOV, 20 cm×20 cm; detector collimation width, 120~160 mm, chosen according to heart size.
Both groups were scanned with prospective ECG gating, with the scan range extending from below the tracheal carina to 0.5 cm below the inferior border of the heart. Group A was imaged over the full cardiac cycle (acquisition time window 0%~100%). After scanning, 20 image sets were reconstructed at 5% intervals over the 0%~95% phase range, and the best set was identified from the image quality of the right coronary artery, the left anterior descending artery, and the left circumflex artery. Four further sets were then reconstructed at 1% intervals on either side of that phase, and the best of these 9 sets defined the optimal acquisition phase for each subgroup. The 95% interval of the optimal acquisition phases, i.e., the mean optimal reconstruction phase ± 2 × standard deviation, was taken as the optimized scan time window [10]. The subgroups of group B then underwent CCTA using the scan time windows optimized in group A.
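As a concrete illustration of the window rule above (mean optimal phase ± 2 × standard deviation as an approximate 95% interval), the following Python sketch derives an optimized time window from a list of per-patient optimal phases; the phase values in the example are hypothetical.

```python
from statistics import mean, stdev

def optimized_time_window(best_phases):
    """Optimized ECG scan time window for one heart-rate subgroup, taken as
    the mean optimal acquisition phase +/- 2 * SD (approx. 95% interval)."""
    m, s = mean(best_phases), stdev(best_phases)
    return m - 2 * s, m + 2 * s

# Hypothetical per-patient optimal phases (% of the R-R interval)
print(optimized_time_window([73, 70, 78, 76, 80, 74, 75]))  # approx. (68.6, 81.7)
```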
For all patients, the iodinated contrast agent (ioversol, 320 mgI/mL) was injected with a power injector (Ulrich Missouri XD 2001) at 4.0 mL/s with a total volume of 0.8 mL/kg, followed by 30 mL of saline at 4.0 mL/s.
Automatic bolus-tracking triggering was used: a 6 mm² region of interest (ROI) was placed in the descending aorta at the four-chamber level, and scanning started 6 s after the CT value within the ROI reached 132 HU.
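A minimal sketch of this bolus-tracking logic, assuming a hypothetical read_roi_hu() callback that returns the current mean attenuation of the monitoring ROI; it only illustrates the trigger rule and is not scanner software.

```python
import time

TRIGGER_HU = 132    # trigger threshold in the descending-aorta monitoring ROI
SCAN_DELAY_S = 6    # fixed delay between reaching the threshold and scan start

def wait_for_bolus(read_roi_hu, poll_s=0.5):
    """Poll the mean attenuation of the monitoring ROI (read_roi_hu is a
    hypothetical callback) until the trigger threshold is reached, then
    wait the fixed delay before the diagnostic scan would start."""
    while read_roi_hu() < TRIGGER_HU:
        time.sleep(poll_s)
    time.sleep(SCAN_DELAY_S)
```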
The slice thickness and interval of the original images were both 0.5 mm with a 512×512 reconstruction matrix; images were reconstructed with the C-SOFT-AA kernel and a hybrid iterative reconstruction technique (KARL 3D) at an iteration strength of 8.
1.3 Image analysis
Objective evaluation. The CT value and standard deviation (SD) were measured at the center of the ascending aorta, in the proximal segments of the right coronary artery, the left anterior descending artery, and the left circumflex artery, and in the pericardial fat on the same slice, avoiding lesions and artifacts. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated for all images as
$$ \mathrm{SNR}=\frac{\text{mean vessel CT value}}{\text{vessel SD}}, $$ (1)
$$ \mathrm{CNR}=\frac{\text{mean vessel CT value}-\text{mean pericardial fat CT value}}{\text{pericardial fat SD}}. $$ (2)
The volume CT dose index (CTDIvol) and dose-length product (DLP) were recorded for all patients, and the effective dose (ED) was calculated as
$$ \mathrm{ED}=\mathrm{DLP}\times K, $$ (3)
where K = 0.014 mSv/(mGy·cm) is the conversion factor for adult cardiac CT [11].
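Equations (1) to (3) translate directly into code. In the sketch below the ROI values are illustrative, while the DLP equals the group A median from Table 2, which reproduces the reported median ED of about 5.80 mSv.

```python
def snr(vessel_ct, vessel_sd):
    """Eq. (1): signal-to-noise ratio of a vessel ROI."""
    return vessel_ct / vessel_sd

def cnr(vessel_ct, fat_ct, fat_sd):
    """Eq. (2): contrast-to-noise ratio against pericardial fat."""
    return (vessel_ct - fat_ct) / fat_sd

def effective_dose(dlp, k=0.014):
    """Eq. (3): effective dose in mSv from DLP in mGy*cm (adult cardiac K)."""
    return dlp * k

print(snr(440.0, 16.0))           # 27.5 (illustrative ROI values)
print(cnr(440.0, -80.0, 12.0))    # approx. 43.3 (fat attenuation is illustrative)
print(effective_dose(414.18))     # approx. 5.80 mSv (group A median DLP, Table 2)
```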
The degree of coronary stenosis was calculated as
$$ \text{stenosis}=\frac{\text{reference diameter of the stenotic segment}-\text{diameter at the stenosis}}{\text{reference diameter of the stenotic segment}}\times 100\%. $$ (4)
Stenosis <50% was graded as mild, 50%~70% as moderate, 70%~100% as severe, and 100% as complete occlusion; stenosis ≥50% was considered positive. With the DSA results as the reference standard, the diagnostic accuracy of CCTA for coronary segment stenosis was evaluated, and the sensitivity and specificity of CCTA were calculated:
$$ \text{sensitivity}=\frac{\text{true positives}}{\text{true positives}+\text{false negatives}}\times100\%, $$ (5)
$$ \text{specificity}=\frac{\text{true negatives}}{\text{true negatives}+\text{false positives}}\times100\%. $$ (6)
Subjective evaluation. The image quality of the left anterior descending artery, the left circumflex artery, and the right coronary artery was evaluated according to the 18-segment SCCT classification. Images were scored in a double-blinded manner by two radiologists, each with more than 10 years of diagnostic experience: 1, severe vessel artifacts, non-diagnostic; 2, obvious artifacts in some vessel segments, affecting diagnosis; 3, mild artifacts in some vessel segments, not affecting diagnosis; 4, no vessel artifacts. The final score was the average of the two readers' scores, and a score ≥3 was considered to meet clinical diagnostic requirements [12].
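Equations (4) to (6) above can likewise be checked with a few lines of Python; the segment counts are the group A figures reported in Section 2.3, and the diameters in the stenosis example are hypothetical.

```python
def stenosis_percent(reference_d, minimal_d):
    """Eq. (4): degree of stenosis from the reference diameter of the
    stenotic segment and the diameter at the stenosis."""
    return (reference_d - minimal_d) / reference_d * 100

def sensitivity(tp, fn):
    """Eq. (5): true positives / (true positives + false negatives)."""
    return tp / (tp + fn) * 100

def specificity(tn, fp):
    """Eq. (6): true negatives / (true negatives + false positives)."""
    return tn / (tn + fp) * 100

# Group A counts from Section 2.3: 46 of 55 positive segments detected,
# 212 of 224 negative segments correctly excluded
print(round(sensitivity(46, 55 - 46), 2))     # 83.64
print(round(specificity(212, 224 - 212), 2))  # 94.64
print(round(stenosis_percent(3.0, 1.2), 1))   # 60.0 -> moderate stenosis (hypothetical)
```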
1.4 DSA examination
Before the procedure, the DSA process was briefly explained to the patient as part of routine preoperative communication to relieve anxiety.
The patient was placed supine with ECG monitoring connected. The right upper limb was routinely disinfected and draped with sterile towels, all interventional devices were flushed with heparinized saline, and the right radial artery approach was used under local anesthesia with 1% lidocaine. After successful puncture with the Seldinger technique, a 6F arterial sheath was inserted, and 2500 U of unfractionated heparin and 200 μg of nitroglycerin were administered. A guidewire was advanced under fluoroscopy, and coronary angiography was performed with a 5F angiographic catheter.
1.5 Statistical analysis
Statistical analyses were performed with SPSS 26.0.
Categorical data are expressed as frequencies and percentages (n, %) and were compared between groups with the chi-square test. The Shapiro-Wilk test was used to assess the normality of continuous variables. Non-normally distributed variables are expressed as M(Q1, Q3) and were compared with the Mann-Whitney U test; normally distributed variables with homogeneous variance are expressed as $(\bar{x}\pm s)$ and were compared with one-way analysis of variance. Inter-observer agreement was assessed with the Kappa test: k<0.40 indicated poor agreement, 0.4≤k<0.75 moderate agreement, and k≥0.75 good agreement. P<0.05 was considered statistically significant.
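The analyses were performed in SPSS 26.0; the sketch below merely mirrors the decision logic described above (normality and variance checks, then a parametric or non-parametric comparison, plus inter-observer kappa) using SciPy and scikit-learn on hypothetical data.

```python
from scipy import stats
from sklearn.metrics import cohen_kappa_score

def compare_continuous(a, b, alpha=0.05):
    """Shapiro-Wilk normality and Levene's variance test decide between
    one-way ANOVA and the Mann-Whitney U test, as described above."""
    normal = (stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
              and stats.levene(a, b).pvalue > alpha)
    return stats.f_oneway(a, b) if normal else stats.mannwhitneyu(a, b)

# Hypothetical measurements for two groups
group_a = [457.1, 448.3, 470.2, 455.0, 462.7, 449.9]
group_b = [466.4, 471.0, 458.8, 463.2, 475.5, 460.1]
print(compare_continuous(group_a, group_b))

# Inter-observer agreement on subjective scores (k >= 0.75 -> good agreement)
print(cohen_kappa_score([3, 4, 3, 4, 3, 4], [3, 4, 4, 4, 3, 4]))
```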
2. Results
2.1 General characteristics
CCTA was completed successfully in both groups.
There were no statistically significant differences in age, sex, height, or body weight between the two groups.
2.2 Optimal acquisition phase and scan time window of CCTA in the subgroups of group A
The optimal acquisition phases and scan time windows of CCTA for the subgroups of group A are shown in Table 1 and Figure 1.
Table 1  Optimal reconstruction phase and acquisition time window of CCTA for the subgroups of group A
Heart rate/bpm | Optimal acquisition phase/% | Scan time window/% | Optimal acquisition phase/% | Scan time window/%
<51 | 73±6 | 61~85 | |
51~55 | 76±4 | 68~84 | |
56~60 | 76±3 | 70~82 | 40±3 | 34~46
61~65 | 76±3 | 70~82 | 40±3 | 34~46
66~70 | 76±3 | 70~82 | 42±3 | 36~48
71~75 | 77±6 | 65~89 | 44±3 | 38~50
76~80 | 46±5 | 36~56 | 76±4 | 68~84
81~85 | 46±4 | 38~54 | |
>85 | 48±5 | 38~58 | |
2.3 Objective evaluation
The objective image-quality measurements of the two groups are shown in Table 2; none of the differences was statistically significant.
Table 2  Objective evaluation of CCTA images and radiation doses for groups A and B
Item | Group A | Group B | F/Z | P
Descending aorta, CT value^a | 457.48±60.59 | 466.80±61.77 | 3.457 | 0.063
Descending aorta, SD^a | 18.42±2.79 | 17.95±2.24 | 5.182 | 0.230
Right coronary artery, CT value^b | 439.50 (391.60, 489.50) | 442.85 (403.00, 492.43) | −0.821 | 0.412
Right coronary artery, SD^a | 16.24±2.25 | 15.58±1.79 | 15.912 | 0.593
Left anterior descending artery, CT value | 433.38±60.85 | 426.56±60.04 | 0.644 | 0.423
Left anterior descending artery, SD^b | 15.75 (13.13, 17.40) | 15.65 (13.43, 17.30) | −1.205 | 0.228
Left circumflex artery, CT value^b | 437.05 (401.17, 477.87) | 426.45 (383.25, 472.03) | −0.433 | 0.665
Left circumflex artery, SD^a | 14.88±3.06 | 15.57±2.61 | 3.064 | 0.082
SNR^b | 28.88 (25.81, 31.63) | 27.40 (24.40, 30.79) | −1.869 | 0.062
CNR^b | 33.66 (30.35, 36.78) | 31.96 (29.48, 35.34) | −1.640 | 0.101
CTDIvol^b/mGy | 30.13 (27.56, 32.67) | 17.89 (16.24, 19.36) | 11.952 | <0.05
DLP^b/(mGy·cm) | 414.18 (380.58, 456.61) | 247.52 (217.20, 270.18) | 11.750 | <0.05
ED^b/mSv | 5.80 (5.33, 6.39) | 3.47 (3.04, 3.78) | 11.750 | <0.05
Note: a, $(\bar x\pm s)$; b, $M(Q_1, Q_3)$.
The median CTDIvol, DLP, and ED of the two groups are listed in Table 2. The median CTDIvol, DLP, and ED of group B were 40.62%, 40.24%, and 40.17% lower, respectively, than those of group A.
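The percentage reductions quoted above follow directly from the group medians in Table 2, as the short check below shows.

```python
def percent_reduction(a_median, b_median):
    """Relative reduction of the group B median with respect to group A."""
    return (a_median - b_median) / a_median * 100

for name, a, b in [("CTDIvol/mGy", 30.13, 17.89),
                   ("DLP/(mGy*cm)", 414.18, 247.52),
                   ("ED/mSv", 5.80, 3.47)]:
    print(f"{name}: {percent_reduction(a, b):.2f}% lower in group B")
# prints 40.62%, 40.24%, and 40.17%, matching the values quoted above
```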
The median ED of each subgroup is shown in Table 3; the differences in median ED between the corresponding subgroups of groups A and B were statistically significant.
Table 3  Radiation doses for the subgroups of groups A and B
Heart rate/bpm | Median ED in group A/mSv | Median ED in group B/mSv | Z | P | Patients in group A | Patients in group B
<51 | 7.36 (7.01, 7.61) | 3.21 (2.84, 4.01) | −5.945 | <0.05 | 31 | 60
51~55 | 6.57 (6.38, 6.87) | 3.99 (3.22, 4.17) | −7.924 | <0.05 | 73 | 70
56~60 | 6.13 (5.92, 6.40) | 3.79 (3.61, 3.88) | −10.160 | <0.05 | 136 | 103
61~65 | 5.68 (5.49, 5.87) | 3.54 (3.49, 3.61) | −8.916 | <0.05 | 86 | 92
66~70 | 5.46 (5.26, 5.64) | 3.33 (3.18, 3.46) | −7.440 | <0.05 | 76 | 57
71~75 | 5.18 (5.09, 5.46) | 3.30 (3.08, 3.37) | −6.420 | <0.05 | 43 | 50
76~80 | 4.92 (4.53, 5.07) | 3.11 (2.77, 3.19) | −4.899 | <0.05 | 24 | 30
81~85 | 4.64 (4.55, 4.81) | 2.07 (2.01, 3.05) | −3.762 | <0.05 | 17 | 16
>85 | 4.44 (4.43, 4.62) | 1.98 (1.98, 1.99) | −3.606 | <0.05 | 14 | 22
A total of 546 coronary segments from 32 patients were assessed against DSA, including 279 segments from group A and 267 from group B. In group A, the sensitivity and specificity of CCTA were 83.64% (46/55) and 94.64% (212/224); in group B they were 84.91% (45/53) and 92.99% (199/214). Comparison with DSA showed no significant difference between the two groups in the sensitivity and specificity of CCTA for coronary branch stenosis.
2.4 Subjective evaluation
The subjective scores assigned by the two radiologists to the CCTA images of the two groups are shown in Table 4.
Table 4  Subjective evaluation of CCTA images for groups A and B
Vessel | Score | Group A, n (%) | Group B, n (%) | χ² | P
Right coronary artery | 3 | 247 (49.4) | 236 (47.2) | 0.126 | 0.722
Right coronary artery | 4 | 253 (50.6) | 264 (52.8) | |
Left anterior descending artery | 3 | 354 (70.8) | 326 (65.2) | 0.827 | 0.363
Left anterior descending artery | 4 | 146 (29.2) | 174 (34.8) | |
Left circumflex artery | 3 | 349 (69.8) | 315 (63.0) | 1.100 | 0.294
Left circumflex artery | 4 | 151 (30.2) | 185 (37.0) | |
Statistical analysis showed no statistically significant difference in the subjective scores of the CCTA images between the two groups. Agreement between the two radiologists was good, with k = 0.894.
3. Discussion
Heart rate is one of the key factors determining the success of a CCTA examination [13]. Prospective ECG gating acquires data only during the period when cardiac motion is relatively slow, so the scan time window on the ECG is narrower than in the conventional single-cardiac-cycle mode and exposure outside this window is reduced or eliminated; it is therefore an effective means of lowering the radiation dose of CCTA [13].
Previous studies have shown that the optimal acquisition phase of CCTA lies in mid-to-late diastole for patients with low heart rates (<61 bpm), at end-systole for patients with high heart rates (>75 bpm), and at end-systole or in mid-to-late diastole for patients with intermediate heart rates (61~75 bpm) [9-14]. Our results are partly inconsistent with these conclusions.
In this study, the optimal acquisition phase fell in systole for 12.50% of the patients in subgroup A3 (56~60 bpm) and in diastole for 29.41% of the patients in subgroup A7 (76~80 bpm). A possible explanation is that the slow heart rate in subgroup A3 prolongs systole, so phases of relatively slow cardiac motion also occur during systole, whereas in subgroup A7 an optimal phase can still be found in diastole, so imaging remains feasible at these heart rates.
We also found that 80 bpm is the critical heart rate at which the preferred CCTA reconstruction shifts from diastole to systole. In addition, the optimal reconstruction phase shifts progressively later as the heart rate increases.
To ensure the success rate and image quality of CCTA, scanning is often performed over the full cardiac cycle or with a wide time window [13], which greatly increases the radiation dose to the patient.
Our results show no statistically significant difference in image quality between groups A and B, while the median ED of group B was 42.56% lower than that of group A, because the narrowed scan time window of group B shortened the X-ray exposure time. The dose reduction was especially pronounced in subgroups B1 and B2 (<55 bpm) and B8 and B9 (>80 bpm), whose doses were 56.39%, 39.27%, 55.39%, and 55.41% lower than those of subgroups A1, A2, A8, and A9, respectively. This is because B1, B2, B8, and B9 used only one scan time window, whereas the other subgroups of group B used two scan time windows, with low-dose X-ray exposure continuing between the two windows.
For the subset of patients whose CCTA results were compared with DSA, there was no significant difference between the two groups in the sensitivity and specificity of CCTA for coronary branch stenosis, indicating that narrowing the acquisition time window does not reduce the diagnostic performance for coronary stenosis.
Limitations of this study: ① the subgroup sample sizes were unequal, with relatively few patients at high heart rates, and more high-heart-rate patients should be enrolled in future work; ② this was a single-center study; ③ the CCTA results were not verified against DSA in all patients.
In summary, patients with different heart rates have different optimal acquisition phases. For low heart rates (<55 bpm) the scan window can be narrowed to mid-to-late diastole, for intermediate heart rates (55~80 bpm) to end-systole or mid-to-late diastole, and for high heart rates (>80 bpm) to end-systole. Optimizing the CCTA scan time window can thus greatly reduce the radiation dose while maintaining image quality.
-
Table 1  Performance comparison of typical CT imaging algorithms on existing data sets
Category | Method | Main features | Advantages | Disadvantages
Traditional CT reconstruction | FBP | Analytical and the most basic reconstruction algorithm | Fast imaging and good robustness | Poor reconstruction quality for sparse-view CT
Traditional CT reconstruction | TV-POCS | Iterative reconstruction constrained by a TV regularization term | Operates on the raw projection data, so information is less likely to be lost; better noise reduction than FBP | Results tend to show blocky artifacts, and some important fine structures are smoothed out
Traditional CT post-processing | BM3D | Block-matching-based post-processing | Better detail preservation than TV-POCS | Denoised results show blurring and distortion
Traditional CT post-processing | K-SVD | Dictionary-learning-based post-processing | | Long running time; some artifacts remain in the denoised results
Deep learning-based CT reconstruction | iCT-Net | Uses CNNs to learn the FBP steps: projection-data extension, filtering, backprojection, and summation | Good reconstruction for sparse-view, short-scan, and interior-scan CT | Does not address cone-beam CT reconstruction; large number of network parameters
Deep learning-based CT reconstruction | LEARN | Unrolled iterative algorithm that unrolls the "fields of experts" model and learns it with CNNs | Reconstructions retain more edges and details; more efficient than traditional iterative algorithms | Limited robustness; places some restrictions on the regularization function
Deep learning-based CT reconstruction | DRONE | Dual-domain residual-based optimization network | High reconstruction accuracy | Requires more data for training
Deep learning-based CT post-processing | RED-CNN | CNN with 5 encoding and 5 decoding layers; the initial input and the features of the 2nd and 4th encoding layers are merged into the corresponding decoding stages through residual connections | Better denoising than traditional post-processing algorithms, with fewer residual artifacts | Results tend to be over-smoothed, losing fine details such as small vessels
Deep learning-based CT post-processing | WGAN-VGG | GAN with an 8-layer convolutional generator and a discriminator of 6 convolutional and 2 fully connected layers; loss: WGAN + VGG perceptual loss | Stable training; effectively alleviates over-smoothing | Tends to damage original image structures and introduce new noise while suppressing artifacts
Deep learning-based CT post-processing | SACNN | CNN combining self-attention and autoencoder modules | Good balance between artifact suppression and structure preservation | Some noise remains, and fine structures are distorted
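As an illustration of the RED-CNN entry in Table 1 (5 convolutional encoding layers, 5 deconvolutional decoding layers, and residual shortcuts from the input and from the 2nd and 4th encoder stages), the following PyTorch sketch reproduces that connectivity; the channel count and kernel size are illustrative rather than the published configuration.

```python
import torch
import torch.nn as nn

class REDCNNLike(nn.Module):
    """Sketch of a RED-CNN-style residual encoder-decoder for LDCT denoising:
    5 conv encoding layers, 5 deconv decoding layers, with shortcut additions
    from the input and from the 2nd and 4th encoder stages."""

    def __init__(self, ch=96, k=5):
        super().__init__()
        self.enc = nn.ModuleList(
            [nn.Conv2d(1 if i == 0 else ch, ch, k) for i in range(5)])
        self.dec = nn.ModuleList(
            [nn.ConvTranspose2d(ch, 1 if i == 4 else ch, k) for i in range(5)])
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        skips = [x]                     # shortcut from the raw LDCT input
        for i, conv in enumerate(self.enc):
            x = self.relu(conv(x))
            if i in (1, 3):             # keep features of the 2nd/4th encoder layer
                skips.append(x)
        for i, deconv in enumerate(self.dec):
            x = deconv(x)
            if i in (0, 2, 4):          # residual additions at matching decoder stages
                x = x + skips.pop()
            x = self.relu(x)
        return x

# Shape check on a single-channel 64x64 patch
print(REDCNNLike()(torch.zeros(1, 1, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```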
-
[1] BRENNER D J, HALL E J. Computed tomography: An increasing source of radiation exposure[J]. New England Journal of Medicine, 2007, 357(22): 2277−2284.
[2] SMITH-BINDMAN R, LIPSON J, MARCUS R, et al. Radiation dose associated with common computed tomography examinations and the associated lifetime attributable risk of cancer[J]. Archives of Internal Medicine, 2009, 169(22): 2078−2086. doi: 10.1001/archinternmed.2009.427
[3] HART D, WALL B F. UK population dose from medical X-ray examinations[J]. European Journal of Radiology, 2004, 50(3): 285−291. doi: 10.1016/S0720-048X(03)00178-5
[4] HSIEH J. Computed tomography: Principles, design, artifacts, and recent advances[M]. SPIE Press, 2003.
[5] SHRIMPTON P C, HILLIER M C, LEWIS M A, et al. Doses from computed tomography (CT) examinations in the UK-2003 review[M]. National Radiological Protection Board, 2005.
[6] SIGAL-CINQUALBRE A B, HENNEQUIN R, ABADA H T, et al. Low-kilovoltage multi-detector row chest CT in adults: Feasibility and effect on image quality and iodine dose[J]. Radiology, 2004, 231(1): 169−174. doi: 10.1148/radiol.2311030191
[7] YEDDER H B, CARDOEN B, HAMARNEH G. Deep learning for biomedical image reconstruction: A survey[J]. Artificial Intelligence Review, 2021, 54(1): 215−251.
[8] LIU C, QIN W C. Multi-slice spiral CT (I)[J]. Journal of Medical Imaging, 2000, 10(3): 194−195. (in Chinese).
[9] SHANGGUAN H. Study on statistical iterative reconstruction methods for low-dose X-ray CT[D]. Taiyuan: North University of China, 2016.
[10] MA J, LIANG Z, FAN Y, et al. Variance analysis of X-ray CT sinograms in the presence of electronic noise background[J]. Medical Physics, 2012, 39(7): 4051−4065.
[11] ZHANG H, OUYANG L, MA J, et al. Noise correlation in CBCT projection data and its appli-cation for noise reduction in low-dose CBCT[J]. Medical Physics, 2014, 41(3): 031906. doi: 10.1118/1.4865782
[12] XIE S P, LUO L M, YANG L F, et al. Scatter correction method for cone beam CT using beam attenuation grid[C]//Key Engineering Materials. Trans Tech Publications Ltd, 2011, 480: 341-346.
[13] HSIEH J. Adaptive streak artifact reduction in computed tomography resulting from excessive X-ray photon noise[J]. Medical Physics, 1998, 25(11): 2139−2147.
[14] KACHELRIESS M, WATZKE O, KALENDER W A. Generalized multi-dimensional adaptive filtering for conventional and spiral single-slice, multi-slice, and cone-beam CT[J]. Medical Physics, 2001, 28(4): 475−490.
[15] SAHINER B, YAGLE A E. Image reconstruction from projections under wavelet constraints[J]. IEEE Transactions on Signal Processing, 1993, 41(12): 3579−3584. doi: 10.1109/78.258101
[16] YAZDI M, BEAULIEU L. Artifacts in spiral X-ray CT scanners: Problems and solutions[J]. International Journal of Biological and Medical Sciences, 2008, 4(3): 135−139.
[17] DEMIRKAYA O. Reduction of noise and image artifacts in computed tomography by nonlinear filtration of projection images[C]//Medical Imaging 2001. International Society for Optics and Photonics, 2001: 917-923.
[18] WANG J, LI T, LU H, et al. Penalized weighted least-squares approach for low-dose X-ray computed tomography[C]//Medical Imaging 2006: Physics of Medical Imaging. International Society for Optics and Photonics, 2006, 6142: 614247.
[19] ZHANG Q, GUI Z, CHEN Y, et al. Bayesian sinogram smoothing with an anisotropic diffusion weighted prior for low-dose X-ray computed tomography[J]. Optik-International Journal for Light and Electron Optics, 2013, 124(17): 2811−2816. doi: 10.1016/j.ijleo.2012.08.045
[20] RUDIN L I, OSHER S, FATEMI E. Nonlinear total variation based noise removal algorithms[J]. Physica D: Nonlinear Phenomena, 1992, 60(1): 259−268.
[21] LIANG N, LI Z, WANG L, et al. A limited-angle computed tomography reconstruction algorithm based on projection completion via generative adversarial networks[J]. Chinese Journal of Stereology and Image Analysis, 2019, 24(1): 1−8. (in Chinese).
[22] GORDON R, BENDER R, HERMAN G T. Algebraic reconstruction techniques (ART) for three- dimensional electron microscopy and X-ray photography[J]. Journal of Theoretical Biology, 1970, 29(3): 471−481. doi: 10.1016/0022-5193(70)90109-8
[23] ZHANG H, WANG J, MA J, et al. Statistical models and regularization strategies in statistical image reconstruction of low-dose X-ray CT: A survey[J]. arXiv preprint arXiv: 1412.1732, 2014.
[24] LEVITAN E, HERMAN G T. A maximum a posteriori probability expectation maximization algorithm for image reconstruction in emission tomography[J]. IEEE Transactions on Medical Imaging, 1987, 6(3): 185−192. doi: 10.1109/TMI.1987.4307826
[25] HSIAO T, RANGARAJAN A, GINDI G. A new convex edge-preserving median prior with applications to tomography[J]. IEEE Transactions on Medical Imaging, 2003, 22(5): 580−585. doi: 10.1109/TMI.2003.812249
[26] LU Y, ZHAO J, WANG G. Few-view image reconstruction with dual dictionaries[J]. Physics in Medicine & Biology, 2011, 57(1): 173.
[27] BAI T, MOU X, XU Q, et al. Low-dose CT reconstruction based on multiscale dictionary[C]//International Society for Optics and Photonics. Medical Imaging 2013: Physics of Medical Imaging, 2013, 8668: 86683L.
[28] NIU S, GAO Y, BIAN Z, et al. Sparse-view X-ray CT reconstruction via total generalized variation regularization[J]. Physics in Medicine and Biology, 2014, 59(12): 2997. doi: 10.1088/0031-9155/59/12/2997
[29] LIU Y, LIANG Z, MA J, et al. Total variation-stokes strategy for sparse-view X-ray CT image reconstruction[J]. IEEE Transactions on Medical Imaging, 2013, 33(3): 749−763.
[30] LIANG K, YANG H, XING Y. Comparison of projection domain, image domain, and comprehensive deep learning for sparse-view X-ray CT image reconstruction[J]. arXiv preprint arXiv: 1804.04289, 2018.
[31] PELT D M, BATENBURG K J. Improving filtered backprojection reconstruction by data-dependent filtering[J]. IEEE Transactions on Image Processing, 2014, 23(11): 4750−4762. doi: 10.1109/TIP.2014.2341971
[32] WANG B, LIU H. FBP-Net for direct reconstruction of dynamic PET images[J]. Physics in Medicine & Biology, 2020, 65(23): 235008.
[33] ZHANG Q, LIANG D. Visualization of fully connected layer weights in deep learning CT reconstruction[J]. arXiv preprint arXiv: 2002.06788, 2020.
[34] ZHU B, LIU J Z, CAULEY S F, et al. Image reconstruction by domain-transform manifold learning[J]. Nature, 2018, 555(7697): 487−492. doi: 10.1038/nature25988
[35] FU L, DE MAN B. A hierarchical approach to deep learning and its application to tomographic reconstruction[C]//International Society for Optics and Photonics. 15th International Meeting on Fully Three-Dimensional Image Reconst- ruction in Radiology and Nuclear Medicine, 2019, 11072: 1107202.
[36] YE D H, BUZZARD G T, RUBY M, et al. Deep back projection for sparse-view CT reconstruction[C]//2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP). IEEE, 2018: 1-5.
[37] TAO X, ZHANG H, WANG Y, et al. VVBP-tensor in the FBP algorithm: Its properties and application in low-dose CT reconstruction[J]. IEEE Transactions on Medical Imaging, 2019, 39(3): 764−776.
[38] TAO X, WANG Y, LIN L, et al. Learning to reconstruct CT images from the VVBP-tensor[J]. IEEE Transactions on Medical Imaging, 2021.
[39] LI Y, LI K, ZHANG C, et al. Learning to reconstruct computed tomography images directly from sinogram data under a variety of data acquisition conditions[J]. IEEE Transactions on Medical Imaging, 2019, 38(10): 2469−2481. doi: 10.1109/TMI.2019.2910760
[40] HE J, WANG Y, MA J. Radon inversion via deep learning[J]. IEEE Transactions on Medical Imaging, 2020, 39(6): 2076−2087. doi: 10.1109/TMI.2020.2964266
[41] WU W, HU D, NIU C, et al. DRONE: Dual-domain residual-based optimization network for sparse-view CT reconstruction[J]. IEEE Transactions on Medical Imaging, 2021.
[42] GE Y, SU T, ZHU J, et al. ADAPTIVE-NET: Deep computed tomography reconstruction network with analytical domain transformation knowledge[J]. Quantitative Imaging in Medicine and Surgery, 2020, 10(2): 415. doi: 10.21037/qims.2019.12.12
[43] ZHANG Q, HU Z, JIANG C, et al. Artifact removal using a hybrid-domain convolutional neural network for limited-angle computed tomography imaging[J]. Physics in Medicine & Biology, 2020, 65(15): 155010.
[44] LIN W A, LIAO H, PENG C, et al. Dudonet: Dual domain network for CT metal artifact reduction[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 10512-10521.
[45] WANG T, XIA W, HUANG Y, et al. DAN-Net: Dual-domain adaptive-scaling non-local network for CT metal artifact reduction[J]. arXiv Preprint arXiv: 2102.08003, 2021.
[46] WANG T, XIA W, LU Z, et al. IDOL-Net: An interactive dual-domain parallel network for CT metal artifact reduction[J]. arXiv Preprint arXiv: 2104.01405, 2021.
[47] RAN M, XIA W, HUANG Y, et al. Md-recon-net: A parallel dual-domain convolutional neural network for compressed sensing MRI[J]. IEEE Transactions on Radiation and Plasma Medical Sciences, 2020, 5(1): 120−135.
[48] CHEN H, ZHANG Y, CHEN Y, et al. LEARN: Learned experts' assessment-based reconstruction network for sparse-data CT[J]. IEEE Transactions on Medical Imaging, 2018, 37(6): 1333−1347. doi: 10.1109/TMI.2018.2805692
[49] WU D, KIM K, El Fakhri G, et al. Iterative low-dose CT reconstruction with priors trained by artificial neural network[J]. IEEE Transactions on Medical Imaging, 2017, 36(12): 2479−2486. doi: 10.1109/TMI.2017.2753138
[50] KANG E, CHANG W, YOO J, et al. Deep convolutional framelet denosing for low-dose CT via wavelet residual network[J]. IEEE Transactions on Medical Imaging, 2018, 37(6): 1358−1369. doi: 10.1109/TMI.2018.2823756
[51] GAO Y, LIANG Z, MOORE W, et al. A feasibility study of extracting tissue textures from a previous full-dose CT database as prior knowledge for Bayesian reconstruction of current low-dose CT images[J]. IEEE Transactions on Medical Imaging, 2019, 38(8): 1981−1992. doi: 10.1109/TMI.2018.2890788
[52] VENKATAKRISHNAN S V, BOUMAN C A, WOHLBERG B. Plug-and-play priors for model based reconstruction[C]//2013 IEEE Global Conference on Signal and Information Processing. IEEE, 2013: 945-948.
[53] SREEHARI S, VENKATAKRISHNAN S V, WOHLBERG B, et al. Plug-and-play priors for bright field electron tomography and sparse interpolation[J]. IEEE Transactions on Computational Imaging, 2016, 2(4): 408−423.
[54] CASCARANO P, PICCOLOMINI E L, MOROTTI E, et al. Plug-and-play external and internal priors for image restoration[J]. arXiv e-prints, 2021, arXiv: 2102.07510.
[55] ADLER J, OKTEM O. Learned Primal-dual reconstruction[J]. IEEE Transactions on Medical Imaging, 2017: 1322−1332.
[56] XIA W, LU Z, HUANG Y, et al. MAGIC: Manifold and graph integrative convolutional network for low-dose CT reconstruction[J]. IEEE Transactions on Medical Imaging, 2021.
[57] CHEN G, HONG X, DING Q, et al. AirNet: Fused analytical and iterative reconstruction with deep neural network regularization for sparse-data CT[J]. Medical Physics, 2020, 47(7): 2916−2930. doi: 10.1002/mp.14170
[58] GUPTA H, JIN K H, NGUYEN H Q, et al. CNN-based projected gradient descent for consistent CT image reconstruction[J]. IEEE Transactions on Medical Imaging, 2018, 37(6): 1440−1453. doi: 10.1109/TMI.2018.2832656
[59] ZHANG H, LIU B, YU H, et al. MetaInv-net: Meta inversion network for sparse view CT image reconstruction[J]. IEEE Transactions on Medical Imaging, 2020, 40(2): 621−634.
[60] BUADES A, COLL B, MOREL J M. A non-local algorithm for image denoising[C]//2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). IEEE, 2005, 2: 60-65.
[61] CHEN Y, YANG Z, HU Y, et al. Thoracic low-dose CT image processing using an artifact suppressed large-scale nonlocal means[J]. Physics in Medicine & Biology, 2012, 57(9): 2667.
[62] CHEN Y, CHEN W, YIN X, et al. Improving low-dose abdominal CT images by weighted intensity averaging over large-scale neighborhoods[J]. European Journal of Radiology, 2011, 80(2): e42−e49. doi: 10.1016/j.ejrad.2010.07.003
[63] ZHONG J, NING R, CONOVER D. Image denoising based on multiscale singularity detection for cone beam CT breast imaging[J]. IEEE Transactions on Medical Imaging, 2004, 23(6): 696−703. doi: 10.1109/TMI.2004.826944
[64] FERUGLIO P F, VINEGONI C, GROS J, et al. Block matching 3D random noise filtering for absorption optical projection tomography[J]. Physics in Medicine & Biology, 2010, 55(18): 5401.
[65] CHEN Y, SHI L, FENG Q, et al. Artifact suppressed dictionary learning for low-dose CT image processing[J]. IEEE Transactions on Medical Imaging, 2014, 33(12): 2271−2292. doi: 10.1109/TMI.2014.2336860
[66] ZAMYATIN A, KATSEVICH G, KRYLOV R, et al. Adaptive multi-scale total variation minimization filter for low dose CT imaging[C]//SPIE Medical Imaging. International Society for Optics and Photonics, 2014: 903426-903426-7.
[67] MENG B, JIANG H, LIU Z, et al. Curvelet-based bilinear interpolation method for low-dose CT[C]//International Society for Optics and Photonics. Fifth International Conference on Digital Image Processing, 2013: 88783X-88783X-5.
[68] CHEN H, ZHANG Y, ZHANG W H. Low dose CT via convolutional neural network[J]. Biomedical Optics Express, 2017, 8(2): 679−694. DOI: 10.1364/BOE.8.000679.
[69] SHAN H M, ZHANG Y, YANG Q S, et al. 3-D convolutional encoder-decoder network for low-dose CT via transfer learning from a 2-D trained network[J]. IEEE Transactions on Medical Imaging, 2018, 37(6): 1522−1534. DOI: 10.1109/TMI.2018.2832217.
[70] YANG Q S, YAN P K, ZHANG Y B, et al. Low-dose CT image denoising using a generative adversarial network with Wasserstein distance and perceptual loss[J]. IEEE Transactions on Medical Imaging, 2018, 36(7): 1348−1357. DOI: 10.1109/TMI.2018.2827462.
[71] ZHU J Y, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proceedings of the IEEE international conference on computer vision. 2017: 2223-2232.
[72] CHEN Y. CT-LSTM: Detection & estimation duplexed system for robust object tracking[C]//Proceedings of the 2nd International Conference on Computer Science and Application Engineering, 2018: 1-7.
[73] BURGER H C, SCHULER C J, HARMELING S. Image denoising: Can plain neural networks compete with BM3D?[C]//2012 IEEE conference on computer vision and pattern recognition. IEEE, 2012: 2392-2399.
[74] CHEN H, ZHANG Y, KALRA M, et al. Low-dose CT with a residual encoder-decoder convolutional neural network[J]. IEEE Transactions on Medical Imaging, 2017, 2(3): 2524−2535. DOI: 10.1109/TMI.2017.2715284.
[75] HEINRICH M P, STILLE M, BUZUG T M. Residual U-net convolutional neural network architecture for low-dose CT denoising[J]. Current Directions in Biomedical Engineering, 2018, 4(1): 297−300. doi: 10.1515/cdbme-2018-0072
[76] WU D F, KIM K, FAKHRI G E, et al. A cascaded convolutional neural network for X-ray low-dose CT image denoising[OL]. https://arxiv.org/abs/1705.04267, 2017.5.
[77] SHAN H, PADOLE A, HOMAYOUNIEH F, et al. Competitive performance of a modularized deep neural network compared to commercial algorithms for low-dose CT image reconstruction[J]. Nature Machine Intelligence, 2019, 1(6): 269−276. doi: 10.1038/s42256-019-0057-9
[78] WOLTERINK J M, LEINER T, VIERGEVER M A, et al. Generative adversarial networks for noise reduction in low-dose CT[J]. IEEE Transactions on Medical Imaging, 2017, 36(12): 2536−2545. DOI: 10.1109/TMI.2017.2708987.
[79] YI X, BABYN P. Sharpness-aware low-dose CT denoising using conditional generative adversarial network[J]. Journal of Digital Imaging, 2018, 31(5): 655−669. doi: 10.1007/s10278-018-0056-0
[80] ISOLA P, ZHU J Y, ZHOU T, et al. Image-to-image translation with conditional adversarial networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 1125-1134.
[81] YANG L, SHANGGUAN H, ZHANG X, et al. High-frequency sensitive generative adversarial network for low-dose CT image denoising[J]. IEEE Access, 2019, 8: 930−943.
[82] YOU C, YANG L, ZHANG Y, et al. Low-dose CT via deep CNN with skip connection and network-in- network[C]//Developments in X-ray Tomography XII. International Society for Optics and Photonics, 2019, 11113: 111131W.
[83] ARJOVSKY M, CHINTALA S, BOTTOU L. Wasserstein GAN[J]. arXiv Preprint arXiv: 1701.07875, 2017.
[84] LI X, YE C, YAN Y, et al. Low-dose CT image denoising based on improved WGAN-gp[J]. Journal of New Media, 2019, 1(2): 75. doi: 10.32604/jnm.2019.06259
[85] MAO X, LI Q, XIE H, et al. Least squares generative adversarial networks[C]//Proceedings of the IEEE International Conference on Computer Vision, 2017: 2794-2802.
[86] MIYATO T, KATAOKA T, KOYAMA M, et al. Spectral normalization for generative adversarial networks[J]. arXiv Preprint arXiv: 1802.05957, 2018.
[87] PARK H S, BAEK J, YOU S K, et al. Unpaired image denoising using a generative adversarial network in X-ray CT[J]. IEEE Access, 2019, 7: 110414−110425. doi: 10.1109/ACCESS.2019.2934178
[88] DU W, CHEN H, LIAO P, et al. Visual attention network for low-dose CT[J]. IEEE Signal Processing Letters, 2019, 26(8): 1152−1156. doi: 10.1109/LSP.2019.2922851
[89] RAJEEV R, SAMATH J A, KARTHIKEYAN N K. An intelligent recurrent neural network with long short-term memory (LSTM) BASED batch normalization for medical image denoising[J]. Journal of Medical Systems, 2019, 43(8): 234. doi: 10.1007/s10916-019-1371-9
[90] YANG Q, YAN P, KALRA M K, et al. CT image denoising with perceptive deep neural networks[J]. arXiv Preprint arXiv: 1702.07019, 2017.
[91] YOU C, YANG Q, SHAN H, et al. Structurally-sensitive multi-scale deep neural network for low-dose CT denoising[J]. IEEE Access, 2018, 6: 41839−41855. doi: 10.1109/ACCESS.2018.2858196
[92] GU J, YE J C. AdaIN-based tunable cycleGAN for efficient unsupervised low-dose CT denoising[J]. IEEE Transactions on Computational Imaging, 2021, 7: 73−85. doi: 10.1109/TCI.2021.3050266
[93] KWON T, YE J C. Cycle-free cycleGAN using invertible generator for unsupervised low-dose CT denoising[J]. arXiv Preprint arXiv: 2104.08538, 2021.
[94] LIAO H, LIN W A, ZHOU S K, et al. ADN: Artifact disentanglement network for unsupervised metal artifact reduction[J]. IEEE Transactions on Medical Imaging, 2019, 39(3): 634−643.
[95] KO Y, MOON S, BAEK J, et al. Rigid and non-rigid motion artifact reduction in X-ray CT using attention module[J]. Medical Image Analysis, 2021, 67: 101883. doi: 10.1016/j.media.2020.101883
[96] LI M, HSU W, XIE X, et al. SACNN: Self-attention convolutional neural network for low-dose CT denoising with self-supervised perceptual loss network[J]. IEEE Transactions on Medical Imaging, 2020, 39(7): 2289−2301. doi: 10.1109/TMI.2020.2968472
[97] ZHANG H, PATEL V M. Density-aware single image de-raining using a multi-stream dense network[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 695-704.
[98] HU X, FU C W, ZHU L, et al. Depth-attentional features for single-image rain removal[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 8022-8031.
[99] GUO S, YAN Z, ZHANG K, et al. Toward convolutional blind denoising of real photographs[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 1712-1722.
[100] WANG Y, GONG D, YANG J, et al. An effective two-branch model-based deep network for single image deraining[J]. arXiv Preprint arXiv: 1905.05404, 2019.
[101] WANG Y, ZHANG H, LIU Y, et al. Gradient information guided deraining with a novel network and adversarial training[J]. arXiv Preprint arXiv: 1910.03839, 2019.
[102] MCCOLLOUGH C L, CHEN B, HOLMES D, et al. Low dose CT image and projection data (LDCT-and-Projection-data)[DB/OL]. The Cancer Imaging Archive, 2020. (2020-00-00) [2021-05-20]. https://doi.org/10.7937/9npb-2637.
[103] The American Association of Physicists in Medicine (AAPM). Low dose CT grand challenge[DB/OL]. (2017-00-00) [2021-05-20]. http://www.aapm.org/GrandChallenge/LowDoseCT/.
[104] National biomedical imaging archive[DB/OL]. (2021-00-00) [2021-05-20]. NCIP/national-biomedical-image-archive.
[105] CLARK K, VENDT B, SMITH K, et al. The Cancer Imaging Archive (TCIA): Maintaining and operating a public information repository[J]. Journal of Digital Imaging, 2013, 26(6): 1045−1057.
[106] LINGLE W, ERICKSON B, ZULEY M, et al. Radiology data from the cancer genome atlas breast invasive carcinoma [TCGA-BRCA] collection[DB/OL]. The Cancer Imaging Archive, (2016-00-00) [2021-05-20]. http://wiki.cancerimagingarchive.net/.
[107] YI X. Piglet Dataset[DB/OL]. (2019-00-00)[2021-05-20] http://homepage.usask.ca/~xiy525/.