ISSN 1004-4140
CN 11-3017/P


Automatic Identification of Relationship between Tooth Root and Mandibular Canal Based on One Step Deep Neural Network

周炎锜 (ZHOU Yanqi), 戴修斌 (DAI Xiubin), 王东苗 (WANG Dongmiao), 朱书进 (ZHU Shujin), 冒添逸 (MAO Tianyi)

Citation: ZHOU Y Q, DAI X B, WANG D M, et al. Automatic Identification of Relationship between Tooth Root and Mandibular Canal Based on One Step Deep Neural Network[J]. CT Theory and Applications, 2023, 32(2): 198-208. DOI: 10.15953/j.ctta.2022.083. (in Chinese)


doi: 10.15953/j.ctta.2022.083
Funding: Natural Science Foundation of Jiangsu Province (Research on Fast Retrieval of Pathological Images Based on Multi-scale Fine-grained Networks and Binary Autoencoder Models, BK20200745); General Program of the Jiangsu Provincial Health Commission (Clinical Study on Accurate Assessment of Inferior Alveolar Nerve Injury Following Mandibular Wisdom Tooth Extraction Based on Deep Learning Models, M2020021); Natural Science Research Project of Jiangsu Higher Education Institutions (Research on Real-time Retrieval of Pathological Images Based on Dynamic Variable-scale Stacked Binary Autoencoders, 20KJB510022).
Article information
    Author biographies:

    ZHOU Yanqi: male, M.S. candidate at the School of Geographic and Biological Information, Nanjing University of Posts and Telecommunications, mainly engaged in medical image processing, E-mail: 1020173024@njupt.edu.cn

    DAI Xiubin: male, associate professor and graduate supervisor at the School of Geographic and Biological Information, Nanjing University of Posts and Telecommunications, mainly engaged in medical image processing, E-mail: daixb@njupt.edu.cn

    Corresponding author:

    DAI Xiubin*, E-mail: daixb@njupt.edu.cn

  • CLC number: O242; R814

Automatic Identification of Relationship between Tooth Root and Mandibular Canal Based on One Step Deep Neural Network

  • Abstract: To improve the accuracy and efficiency of identifying the positional relationship between the roots of impacted mandibular third molars and the mandibular canal on panoramic radiographs, an automatic detection method based on a deep convolutional neural network is proposed. The method treats the automatic detection of this positional relationship as a combined regression and classification task. A deep convolutional neural network that performs classification and localization simultaneously is built on the YOLOv5 framework; the spatial relationship extracted from the corresponding cone-beam CT images is used as the classification gold standard, and the network is trained to learn the nonlinear relationship between panoramic radiograph image features and the roots of impacted third molars that contact the mandibular canal. Once a new panoramic radiograph is fed into the trained model, the probability that the roots of the impacted mandibular third molars contact the mandibular canal is obtained, and the region in which such contact occurs is predicted. Experimental results show that the proposed method accurately determines whether the roots of impacted mandibular third molars contact the mandibular canal and predicts the region in which the contact occurs; compared with manual interpretation and other methods, it yields more accurate detection results.
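
    A hedged illustration of the inference step described above: loading a trained YOLOv5 detector through the standard PyTorch Hub interface and reading off, for a new panoramic radiograph, the predicted contact/contactless label, its probability, and the bounding box of the region. The weights path, image file name, and the two class names are assumptions for illustration, not details taken from the paper; the 0.60 confidence threshold corresponds to the first row of Table 3.

        import torch

        # Load the trained YOLOv5-based detector. The weights path is hypothetical; the
        # two class names are assumed to be "contact" and "contactless".
        model = torch.hub.load("ultralytics/yolov5", "custom", path="weights/best.pt")
        model.conf = 0.60  # confidence threshold; 0.60 matches the first row of Table 3

        # Run inference on a new panoramic radiograph (file name is illustrative).
        results = model("panoramic_radiograph.png")

        # Each detection is [x1, y1, x2, y2, confidence, class_id].
        for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
            label = results.names[int(cls)]   # "contact" or "contactless"
            print(f"{label}: p = {conf:.2f}, box = ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")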

     

  • Figure 1.  The pipeline of the proposed method

    Figure 2.  The structure of the network used in this paper

    Figure 3.  The FPN+PAN structure

    Figure 4.  Comparison of the ROC curves corresponding to the results obtained using the different models

    Figure 5.  The PR curve corresponding to the results obtained using the proposed method
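
    Figures 4 and 5 evaluate the predicted contact probabilities with ROC and PR curves. The following sketch shows one common way to compute such curves with scikit-learn; the label and probability arrays are toy placeholders, not the paper's test data.

        import numpy as np
        from sklearn.metrics import auc, precision_recall_curve, roc_curve

        # 1 = root contacts the mandibular canal (CBCT gold standard), 0 = no contact.
        y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])                          # placeholder labels
        y_score = np.array([0.91, 0.12, 0.67, 0.80, 0.43, 0.05, 0.58, 0.71])  # predicted contact probabilities

        fpr, tpr, _ = roc_curve(y_true, y_score)                         # ROC curve (cf. Fig. 4)
        precision, recall, _ = precision_recall_curve(y_true, y_score)   # PR curve (cf. Fig. 5)
        print(f"AUC = {auc(fpr, tpr):.3f}")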

    Figure 6.  Demonstration of the output results obtained using the proposed method

    In each row, the image on the left is the original panoramic radiograph and the image on the right is the output of the proposed method. In the output, the brown label "contact" indicates that the root of the impacted mandibular third molar is in contact with the mandibular canal, and the blue label "contactless" indicates that the two are not in contact; the brown number is the confidence of a "contact" prediction and the blue number is the confidence of a "contactless" prediction.

    Figure 7.  Examples of predicted results after the regions containing the objects to be detected in the panoramic images have been shifted in different directions

    Table 1.  The main parameters of the backbone network

    Module    Number  Kernels  Kernel size  Stride  Input size     Output size
    Conv      1       80       3×3          2       608×608×3      304×304×80
    Conv      1       160      3×3          2       304×304×80     152×152×160
    CSP1_4    4       160      –            –       152×152×160    152×152×160
    Conv      1       320      3×3          2       152×152×160    76×76×320
    CSP1_8    8       320      –            –       76×76×320      76×76×320
    Conv      1       640      3×3          2       76×76×320      38×38×640
    CSP1_12   12      640      –            –       38×38×640      38×38×640
    Conv      1       1280     3×3          2       38×38×640      19×19×1280
    CSP1_4    4       1280     –            –       19×19×1280     19×19×1280
    SPPF      1       1280     –            –       19×19×1280     19×19×1280
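
    To make the shape bookkeeping in Table 1 concrete, here is a minimal PyTorch sketch that reproduces the channel and resolution progression of the downsampling Conv stages. The CSP1_x and SPPF stages, which preserve spatial size and channel count, are replaced by identity placeholders rather than implemented, so this is only a shape sketch, not the actual YOLOv5 backbone.

        import torch
        import torch.nn as nn

        class ConvBNSiLU(nn.Module):
            """3x3 stride-2 Conv + BatchNorm + SiLU, as in the Conv rows of Table 1."""
            def __init__(self, c_in, c_out, k=3, s=2):
                super().__init__()
                self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
                self.bn = nn.BatchNorm2d(c_out)
                self.act = nn.SiLU()

            def forward(self, x):
                return self.act(self.bn(self.conv(x)))

        # Channel progression from Table 1: 3 -> 80 -> 160 -> 320 -> 640 -> 1280.
        backbone = nn.Sequential(
            ConvBNSiLU(3, 80),      # 608x608x3   -> 304x304x80
            ConvBNSiLU(80, 160),    # 304x304x80  -> 152x152x160
            nn.Identity(),          # CSP1_4,  152x152x160 -> 152x152x160 (placeholder)
            ConvBNSiLU(160, 320),   # 152x152x160 -> 76x76x320
            nn.Identity(),          # CSP1_8,  76x76x320   -> 76x76x320   (placeholder)
            ConvBNSiLU(320, 640),   # 76x76x320   -> 38x38x640
            nn.Identity(),          # CSP1_12, 38x38x640   -> 38x38x640   (placeholder)
            ConvBNSiLU(640, 1280),  # 38x38x640   -> 19x19x1280
            nn.Identity(),          # CSP1_4,  19x19x1280  -> 19x19x1280  (placeholder)
            nn.Identity(),          # SPPF,    19x19x1280  -> 19x19x1280  (placeholder)
        )

        x = torch.zeros(1, 3, 608, 608)
        print(backbone(x).shape)  # torch.Size([1, 1280, 19, 19])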

    Table 2.  Comparison of the classification performance of the proposed method, manual interpretation, and the other models

    Method                  Accuracy  Sensitivity  Specificity  Precision
    Manual interpretation   0.845     0.741        0.892        0.759
    AlexNet                 0.778     0.506        0.919        0.764
    GoogLeNet               0.770     0.434        0.944        0.800
    VGG-16                  0.737     0.422        0.900        0.686
    ResNet-50               0.831     0.663        0.919        0.809
    Proposed method         0.881     0.819        0.913        0.829

    Table 3.  Classification performance at different probability thresholds

    Probability threshold  Accuracy  Sensitivity  Specificity  Precision
    0.60                   0.881     0.819        0.913        0.829
    0.65                   0.881     0.795        0.925        0.846
    0.70                   0.868     0.747        0.931        0.849
    0.75                   0.860     0.699        0.944        0.866
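
    Tables 2 and 3 report accuracy, sensitivity, specificity, and precision after thresholding the predicted contact probability. A small sketch of that computation follows, with toy labels and probabilities standing in for the real test set.

        import numpy as np

        def classification_metrics(y_true, p_contact, threshold=0.60):
            """Accuracy, sensitivity, specificity, and precision at a given probability threshold.

            y_true: 1 if the root contacts the mandibular canal (CBCT gold standard), else 0.
            p_contact: predicted contact probability for each test radiograph.
            """
            y_true = np.asarray(y_true)
            y_pred = (np.asarray(p_contact) >= threshold).astype(int)

            tp = int(np.sum((y_pred == 1) & (y_true == 1)))
            tn = int(np.sum((y_pred == 0) & (y_true == 0)))
            fp = int(np.sum((y_pred == 1) & (y_true == 0)))
            fn = int(np.sum((y_pred == 0) & (y_true == 1)))

            return {
                "accuracy":    (tp + tn) / (tp + tn + fp + fn),
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "precision":   tp / (tp + fp),
            }

        # Sweep the thresholds listed in Table 3 (toy probabilities for illustration only).
        y_true = [1, 0, 1, 1, 0, 0, 1, 0]
        p_contact = [0.91, 0.12, 0.67, 0.80, 0.43, 0.05, 0.58, 0.71]
        for t in (0.60, 0.65, 0.70, 0.75):
            print(t, classification_metrics(y_true, p_contact, threshold=t))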

    Table 4.  Classification performance of the proposed method for different numbers of training iterations, batch sizes, learning rates, and optimizer parameters

    Parameter                    Value   Accuracy  Sensitivity  Specificity  Precision
    Training iterations          800     0.877     0.807        0.913        0.827
                                 1200    0.881     0.819        0.913        0.829
                                 1600    0.835     0.614        0.950        0.864
    Batch size                   4       0.840     0.602        0.963        0.893
                                 6       0.881     0.819        0.913        0.829
    Learning rate                0.0022  0.889     0.759        0.956        0.900
                                 0.0032  0.881     0.819        0.913        0.829
                                 0.0042  0.823     0.590        0.944        0.845
    Optimizer β1 (β2 = 0.999)    0.743   0.868     0.723        0.944        0.870
                                 0.843   0.881     0.819        0.913        0.829
                                 0.943   0.864     0.747        0.925        0.838
    Optimizer β2 (β1 = 0.843)    0.9     0.856     0.675        0.950        0.875
                                 0.99    0.868     0.759        0.925        0.840
                                 0.999   0.881     0.819        0.913        0.829
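
    The β1/β2 rows of Table 4 suggest an Adam-style optimizer. The sketch below wires up the best-performing row (1200 iterations, batch size 6, learning rate 0.0032, β1 = 0.843, β2 = 0.999) in PyTorch; treating the optimizer as Adam and using a placeholder module are assumptions for illustration only, since this excerpt does not name the optimizer.

        import torch

        model = torch.nn.Conv2d(3, 16, 3)  # placeholder module; the paper trains the YOLOv5-based detector
        optimizer = torch.optim.Adam(
            model.parameters(),
            lr=0.0032,                     # learning-rate row of Table 4
            betas=(0.843, 0.999),          # optimizer parameters β1 and β2
        )
        batch_size = 6                     # batch-size row of Table 4
        num_iterations = 1200              # training-iteration row of Table 4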

    Table 5.  Comparison of the number of parameters in the proposed network and the other models

    Network model   AlexNet  GoogLeNet  VGG-16  ResNet-50  Proposed method
    Parameters /M   61.0     7.0        138.4   25.5       87.3

    Table 6.  Comparison of the classification performance on the test images before and after the regions containing the targets were shifted in different directions

    Case             Accuracy  Sensitivity  Specificity  Precision
    Before shifting  0.881     0.819        0.913        0.829
    After shifting   0.864     0.723        0.938        0.857
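
    Figure 7 and Table 6 test robustness by shifting the region that contains the object to be detected and re-running the detector. Below is a minimal NumPy sketch of such a shift, assuming the displaced region stays inside the image; the box coordinates and offsets are illustrative.

        import numpy as np

        def shift_region(img, box, dx, dy, fill=0):
            """Move the sub-region box = (x1, y1, x2, y2) of a 2-D image by (dx, dy) pixels,
            filling the vacated area with `fill`."""
            x1, y1, x2, y2 = box
            h, w = img.shape[:2]
            nx1, ny1, nx2, ny2 = x1 + dx, y1 + dy, x2 + dx, y2 + dy
            assert 0 <= nx1 and 0 <= ny1 and nx2 <= w and ny2 <= h, "shifted box must stay inside the image"
            patch = img[y1:y2, x1:x2].copy()
            out = img.copy()
            out[y1:y2, x1:x2] = fill       # erase the original location
            out[ny1:ny2, nx1:nx2] = patch  # paste the region at its new location
            return out

        # Example: move a hypothetical wisdom-tooth region 30 pixels right and 30 pixels down.
        img = np.zeros((608, 608), dtype=np.uint8)  # placeholder for a panoramic radiograph
        shifted = shift_region(img, box=(200, 300, 360, 450), dx=30, dy=30)
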
Publication history
  • Received:  2022-05-07
  • Revised:  2022-09-25
  • Accepted:  2022-10-27
  • Available online:  2022-11-04
  • Published:  2023-03-31
