1
Kim JH, Noh Y, Lee H, Lee S, Kim WR, Kang KM, Kim EY, Al-Masni MA, Kim DH. Toward automated detection of microbleeds with anatomical scale localization using deep learning. Med Image Anal 2025; 101:103415. [PMID: 39642804] [DOI: 10.1016/j.media.2024.103415]
Abstract
Cerebral Microbleeds (CMBs) are chronic deposits of small blood products in the brain tissues, which, depending on their anatomical location, are explicitly related to various cerebrovascular conditions, including cognitive decline, intracerebral hemorrhage, and cerebral infarction. However, manual detection of CMBs is a time-consuming and error-prone process because of their sparse and tiny structural properties. CMB detection is commonly affected by the presence of many CMB mimics, such as calcifications and pial vessels, which cause a high false-positive rate (FPR). This paper proposes a novel 3D deep learning framework that not only detects CMBs but also identifies their anatomical location in the brain (i.e., lobar, deep, and infratentorial regions). For the CMB detection task, we propose a single end-to-end model by leveraging the 3D U-Net as a backbone with a Region Proposal Network (RPN). To significantly reduce the false positives within the same single model, we develop a new scheme containing a Feature Fusion Module (FFM) that detects small candidates utilizing contextual information and Hard Sample Prototype Learning (HSPL) that mines CMB mimics and generates an additional loss term, called concentration loss, using Convolutional Prototype Learning (CPL). For the anatomical localization task, we exploit the 3D U-Net segmentation network to segment anatomical structures of the brain. This task not only identifies to which region the CMBs belong but also eliminates some false positives from the detection task by leveraging anatomical information. We utilize Susceptibility-Weighted Imaging (SWI) and phase images as 3D input to efficiently capture 3D information. The results show that the proposed RPN utilizing the FFM and HSPL outperforms the baseline RPN, achieving a sensitivity of 94.66% vs. 93.33% and an average number of false positives per subject (FPavg) of 0.86 vs. 14.73. Furthermore, the anatomical localization task enhances the detection performance by reducing the FPavg to 0.56 while maintaining the sensitivity of 94.66%.
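The sensitivity/FPavg trade-off reported above can be reproduced from raw per-subject detection counts. Below is a minimal, illustrative sketch (not the authors' code; the function name and input layout are assumptions) of how these two detection metrics are conventionally computed:

```python
def detection_metrics(per_subject_results):
    """Compute sensitivity and average false positives per subject (FPavg).

    per_subject_results: list of (true_positives, false_negatives,
    false_positives) tuples, one tuple per subject.
    """
    tp = sum(r[0] for r in per_subject_results)
    fn = sum(r[1] for r in per_subject_results)
    fp = sum(r[2] for r in per_subject_results)
    # Sensitivity: fraction of all true lesions that were detected.
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    # FPavg: false positives averaged over subjects, not over lesions.
    fp_avg = fp / len(per_subject_results)
    return sensitivity, fp_avg

# Hypothetical example: 3 subjects with (TP, FN, FP) counts.
sens, fpavg = detection_metrics([(5, 0, 1), (3, 1, 0), (4, 0, 1)])
```

A lower FPavg at the same sensitivity, as in the paper's 14.73 to 0.86 reduction, means fewer candidate marks for a reader to dismiss per scan.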
Affiliation(s)
- Jun-Ho Kim
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, Republic of Korea
- Young Noh
- Neuroscience Research Institute, Gachon University, Incheon, Republic of Korea; Department of Neurology, Gachon University College of Medicine, Gil Medical Center, Incheon, Republic of Korea
- Haejoon Lee
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Seul Lee
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, Republic of Korea
- Woo-Ram Kim
- Neuroscience Research Institute, Gachon University, Incheon, Republic of Korea
- Koung Mi Kang
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Eung Yeop Kim
- Department of Radiology, Gachon University College of Medicine, Gil Medical Center, Incheon, Republic of Korea
- Mohammed A Al-Masni
- Department of Artificial Intelligence and Data Science, College of AI Convergence, Sejong University, Seoul 05006, Republic of Korea
- Dong-Hyun Kim
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, Republic of Korea
2
Șerbănescu MS, Streba L, Demetrian AD, Gheorghe AG, Mămuleanu M, Pirici DN, Streba CT. Transfer Learning-Based Integration of Dual Imaging Modalities for Enhanced Classification Accuracy in Confocal Laser Endomicroscopy of Lung Cancer. Cancers (Basel) 2025; 17:611. [PMID: 40002206] [PMCID: PMC11852907] [DOI: 10.3390/cancers17040611]
Abstract
BACKGROUND/OBJECTIVES Lung cancer remains the leading cause of cancer-related mortality, underscoring the need for improved diagnostic methods. This study seeks to enhance the classification accuracy of probe-based confocal laser endomicroscopy (pCLE) images for lung cancer by applying a dual transfer learning (TL) approach that incorporates histological imaging data. METHODS Histological samples and pCLE images, collected from 40 patients undergoing curative lung cancer surgeries, were selected to create two balanced datasets (800 benign and 800 malignant images each). Three CNN architectures (AlexNet, GoogLeNet, and ResNet) were pre-trained on ImageNet and re-trained either on pCLE images alone (confocal TL) or with dual TL (first re-trained on histological images, then on pCLE images). Model performance was evaluated using accuracy and AUC across 50 independent runs with 10-fold cross-validation. RESULTS The dual TL approach statistically significantly outperformed confocal TL, with AlexNet achieving a mean accuracy of 94.97% and an AUC of 0.98, surpassing GoogLeNet (91.43% accuracy, 0.97 AUC) and ResNet (89.87% accuracy, 0.96 AUC). All networks demonstrated statistically significant (p < 0.001) improvements in performance with dual TL. Additionally, dual TL models showed reductions in both false positives and false negatives, with class activation mappings highlighting enhanced focus on diagnostically relevant regions. CONCLUSIONS Dual TL, integrating histological and pCLE imaging, yields a statistically significant improvement in lung cancer classification. This approach offers a promising framework for enhanced tissue classification and, with further development and testing, has the potential to improve patient outcomes.
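The AUC values compared above follow the standard ranking definition of AUC: the probability that a randomly chosen positive is scored above a randomly chosen negative (the Mann-Whitney statistic). A minimal sketch of that computation (hypothetical function name, not from the paper):

```python
def auc_score(labels, scores):
    """AUC as the probability that a random positive outranks a random
    negative (Mann-Whitney statistic); ties count as 0.5."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # Compare every positive score against every negative score.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation of malignant (1) from benign (0) gives AUC = 1.0.
auc = auc_score([1, 1, 0, 0], [0.9, 0.8, 0.2, 0.1])
```

This O(P·N) form is fine for illustration; library implementations (e.g., rank-based ones) are preferable at scale.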
Affiliation(s)
- Mircea-Sebastian Șerbănescu
- Department of Medical Informatics and Statistics, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Liliana Streba
- Department of Oncology and Palliative Care, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Alin Dragoș Demetrian
- Department of Thoracic Surgery, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Mădălin Mămuleanu
- Department of Automatic Control and Electronics, University of Craiova, 200585 Craiova, Romania
- Daniel-Nicolae Pirici
- Department of Histology, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Costin-Teodor Streba
- Department of Pulmonology, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
3
Mahajan A, Agarwal R, Agarwal U, Ashtekar RM, Komaravolu B, Madiraju A, Vaish R, Pawar V, Punia V, Patil VM, Noronha V, Joshi A, Menon N, Prabhash K, Chaturvedi P, Rane S, Banwar P, Gupta S. A Novel Deep Learning-Based (3D U-Net Model) Automated Pulmonary Nodule Detection Tool for CT Imaging. Curr Oncol 2025; 32:95. [PMID: 39996895] [PMCID: PMC11854842] [DOI: 10.3390/curroncol32020095]
Abstract
BACKGROUND Precise detection and characterization of pulmonary nodules on computed tomography (CT) is crucial for early diagnosis and management. OBJECTIVES In this study, we propose a deep learning-based algorithm to automatically detect pulmonary nodules in CT scans. We evaluated the performance of the algorithm against the interpretation of radiologists to analyze its effectiveness. MATERIALS AND METHODS The study was conducted in collaboration with a tertiary cancer center. We used a collection of public (LUNA) and private (tertiary cancer center) datasets to train our deep learning models. The sensitivity, the number of false positives per scan, and the FROC curve along with the CPM score were used to assess performance by comparing the algorithm's predictions with those of radiologists. RESULTS We evaluated 491 scans containing 5669 pulmonary nodules annotated by a radiologist from our hospital; our algorithm showed a sensitivity of 90% with only 0.3 false positives per scan and a CPM score of 0.85. Apart from the nodule-wise performance, we also assessed the algorithm on detecting patients containing true nodules, where it achieved a sensitivity of 0.95 and a specificity of 1.0 over the 491 scans in the test cohort. CONCLUSIONS Our multi-institutionally validated deep learning-based algorithm can aid radiologists in confirming the detection of pulmonary nodules on CT scans and identifying further abnormalities, and can be used as an assistive tool. This will be helpful in national lung screening programs, guiding early diagnosis and appropriate management.
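The CPM score cited above is conventionally defined (as in the LUNA16 challenge) as the average sensitivity at seven FROC operating points: 1/8, 1/4, 1/2, 1, 2, 4, and 8 false positives per scan. A sketch of that computation, with an assumed input format of sorted (FP-per-scan, sensitivity) pairs, not the authors' implementation:

```python
def cpm_score(froc_points):
    """Competition Performance Metric: mean sensitivity at 1/8, 1/4, 1/2,
    1, 2, 4, and 8 false positives per scan.

    froc_points: list of (fp_per_scan, sensitivity) pairs from the FROC
    curve, sorted by fp_per_scan.
    """
    targets = [0.125, 0.25, 0.5, 1, 2, 4, 8]
    sens_at_targets = []
    for t in targets:
        # Best sensitivity achievable without exceeding t FPs per scan.
        achieved = [s for fp, s in froc_points if fp <= t]
        sens_at_targets.append(max(achieved) if achieved else 0.0)
    return sum(sens_at_targets) / len(targets)

# Hypothetical FROC curve sampled at the seven operating points.
froc = [(0.125, 0.7), (0.25, 0.8), (0.5, 0.85), (1, 0.9),
        (2, 0.92), (4, 0.94), (8, 0.95)]
score = cpm_score(froc)
```

Averaging over low-FP operating points is what makes CPM penalize detectors that only reach high sensitivity at high false-positive rates.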
Affiliation(s)
- Abhishek Mahajan
- Department of Imaging, The Clatterbridge Cancer Centre NHS Foundation Trust, Liverpool L7 8YA, UK
- Faculty of Health and Life Sciences, University of Liverpool, Liverpool L69 3BX, UK
- Rajat Agarwal
- Department of Radiodiagnosis and Imaging, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai 400012, India; (R.A.); (U.A.); (R.M.A.); (P.B.)
- Ujjwal Agarwal
- Department of Radiodiagnosis and Imaging, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai 400012, India; (R.A.); (U.A.); (R.M.A.); (P.B.)
- Renuka M. Ashtekar
- Department of Radiodiagnosis and Imaging, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai 400012, India; (R.A.); (U.A.); (R.M.A.); (P.B.)
- Bharadwaj Komaravolu
- Endimension Technology Pvt Ltd., Maharashtra 400076, India; (B.K.); (A.M.); (V.P.); (V.P.)
- Apparao Madiraju
- Endimension Technology Pvt Ltd., Maharashtra 400076, India; (B.K.); (A.M.); (V.P.); (V.P.)
- Richa Vaish
- Department of Surgical Oncology, Tata Memorial Hospital, Mumbai 400012, India; (R.V.); (P.C.)
- Vivek Pawar
- Endimension Technology Pvt Ltd., Maharashtra 400076, India; (B.K.); (A.M.); (V.P.); (V.P.)
- Vivek Punia
- Endimension Technology Pvt Ltd., Maharashtra 400076, India; (B.K.); (A.M.); (V.P.); (V.P.)
- Vijay Maruti Patil
- Department of Medical Oncology, Tata Memorial Hospital, Mumbai 400012, India; (V.M.P.); (V.N.); (A.J.); (N.M.); (K.P.); (S.G.)
- Vanita Noronha
- Department of Medical Oncology, Tata Memorial Hospital, Mumbai 400012, India; (V.M.P.); (V.N.); (A.J.); (N.M.); (K.P.); (S.G.)
- Amit Joshi
- Department of Medical Oncology, Tata Memorial Hospital, Mumbai 400012, India; (V.M.P.); (V.N.); (A.J.); (N.M.); (K.P.); (S.G.)
- Nandini Menon
- Department of Medical Oncology, Tata Memorial Hospital, Mumbai 400012, India; (V.M.P.); (V.N.); (A.J.); (N.M.); (K.P.); (S.G.)
- Kumar Prabhash
- Department of Medical Oncology, Tata Memorial Hospital, Mumbai 400012, India; (V.M.P.); (V.N.); (A.J.); (N.M.); (K.P.); (S.G.)
- Pankaj Chaturvedi
- Department of Surgical Oncology, Tata Memorial Hospital, Mumbai 400012, India; (R.V.); (P.C.)
- Swapnil Rane
- Department of Pathology, Tata Memorial Hospital, Mumbai 400012, India
- Priya Banwar
- Department of Radiodiagnosis and Imaging, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai 400012, India; (R.A.); (U.A.); (R.M.A.); (P.B.)
- Sudeep Gupta
- Department of Medical Oncology, Tata Memorial Hospital, Mumbai 400012, India; (V.M.P.); (V.N.); (A.J.); (N.M.); (K.P.); (S.G.)
4
Chi J, Zhao J, Wang S, Yu X, Wu C. LGDNet: local feature coupling global representations network for pulmonary nodules detection. Med Biol Eng Comput 2024; 62:1991-2004. [PMID: 38429443] [DOI: 10.1007/s11517-024-03043-w]
Abstract
Detection of suspicious pulmonary nodules from lung CT scans is a crucial task in computer-aided diagnosis (CAD) systems. In recent years, various deep learning-based approaches have been proposed and demonstrated significant potential for addressing this task. However, existing deep convolutional neural networks exhibit limited long-range dependency capabilities and neglect crucial contextual information, resulting in reduced performance on detecting small-size nodules in CT scans. In this work, we propose a novel end-to-end framework called LGDNet for the detection of suspicious pulmonary nodules in lung CT scans by fusing local features and global representations. To overcome the limited long-range dependency capabilities inherent in convolutional operations, a dual-branch module is designed to integrate the convolutional neural network (CNN) branch that extracts local features with the transformer branch that captures global representations. To further address the issue of misalignment between local features and global representations, an attention gate module is proposed in the up-sampling stage to selectively combine misaligned semantic data from both branches, resulting in more accurate detection of small-size nodules. Our experiments on the large-scale LIDC dataset demonstrate that the proposed LGDNet with the dual-branch module and attention gate module could significantly improve the nodule detection sensitivity by achieving a final competition performance metric (CPM) score of 89.49%, outperforming the state-of-the-art nodule detection methods, indicating its potential for clinical applications in the early diagnosis of lung diseases.
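To illustrate only the gating idea behind LGDNet's attention gate (this is a toy 1-D scalar version with made-up weights, not the paper's 3D module), the sketch below computes a sigmoid gate from both branches at each position and blends the local and global features accordingly:

```python
import math

def attention_gate(local_feats, global_feats, w_local=1.0, w_global=1.0, bias=0.0):
    """Toy 1-D attention gate.

    At each position, a sigmoid gate computed from both branches decides
    how much of the CNN (local) versus transformer (global) feature to
    keep. Weights here are hypothetical placeholders; in practice they
    are learned.
    """
    fused = []
    for loc, glo in zip(local_feats, global_feats):
        gate = 1.0 / (1.0 + math.exp(-(w_local * loc + w_global * glo + bias)))
        # Convex combination: output lies between the two branch values.
        fused.append(gate * loc + (1.0 - gate) * glo)
    return fused

fused = attention_gate([1.0, -2.0], [0.5, 3.0])
```

The point of the convex combination is that misaligned semantics from either branch can be suppressed position by position rather than averaged uniformly.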
Affiliation(s)
- Jianning Chi
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang, Liaoning, 110167, China.
- Key Laboratory of Intelligent Computing in Medical Image of Ministry of Education, Northeastern University, Shenyang, Liaoning, 110167, China.
- Jin Zhao
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang, Liaoning, 110167, China
- Siqi Wang
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang, Liaoning, 110167, China
- Xiaosheng Yu
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang, Liaoning, 110167, China
- Chengdong Wu
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang, Liaoning, 110167, China
5
Yousefirizi F, Klyuzhin IS, O JH, Harsini S, Tie X, Shiri I, Shin M, Lee C, Cho SY, Bradshaw TJ, Zaidi H, Bénard F, Sehn LH, Savage KJ, Steidl C, Uribe CF, Rahmim A. TMTV-Net: fully automated total metabolic tumor volume segmentation in lymphoma PET/CT images - a multi-center generalizability analysis. Eur J Nucl Med Mol Imaging 2024; 51:1937-1954. [PMID: 38326655] [DOI: 10.1007/s00259-024-06616-x]
Abstract
PURPOSE Total metabolic tumor volume (TMTV) segmentation has significant value enabling quantitative imaging biomarkers for lymphoma management. In this work, we tackle the challenging task of automated tumor delineation in lymphoma from PET/CT scans using a cascaded approach. METHODS Our study included 1418 2-[18F]FDG PET/CT scans from four different centers. The dataset was divided into 900 scans for the development/validation/testing phases and 518 for multi-center external testing. The former consisted of 450 lymphoma, lung cancer, and melanoma scans, along with 450 negative scans, while the latter consisted of lymphoma patients from different centers with diffuse large B cell, primary mediastinal large B cell, and classic Hodgkin lymphoma cases. Our approach involves resampling PET/CT images into different voxel sizes in the first step, followed by training multi-resolution 3D U-Nets on each resampled dataset using a fivefold cross-validation scheme. The models trained on different data splits were ensembled. After applying soft voting to the predicted masks, in the second step, we input the probability-averaged predictions, along with the input imaging data, into another 3D U-Net. Models were trained with a semi-supervised loss. We additionally considered the effectiveness of using test time augmentation (TTA) to improve the segmentation performance after training. In addition to quantitative analysis, including Dice score (DSC) and TMTV comparisons, qualitative evaluation was also conducted by nuclear medicine physicians. RESULTS Our cascaded soft-voting guided approach resulted in performance with an average DSC of 0.68 ± 0.12 for the internal test data from the development dataset, and an average DSC of 0.66 ± 0.18 on the multi-site external data (n = 518), significantly outperforming (p < 0.001) state-of-the-art (SOTA) approaches including nnU-Net and SWIN UNETR. While TTA yielded enhanced performance gains for some of the comparator methods, its impact on our cascaded approach was found to be negligible (DSC: 0.66 ± 0.16). Our approach reliably quantified TMTV, with a correlation of 0.89 with the ground truth (p < 0.001). Furthermore, in terms of visual assessment, concordance between quantitative evaluations and clinician feedback was observed in the majority of cases. The average relative error (ARE) and the absolute error (AE) in TMTV prediction on the external multi-center dataset were ARE = 0.43 ± 0.54 and AE = 157.32 ± 378.12 (mL) for all the external test data (n = 518), and ARE = 0.30 ± 0.22 and AE = 82.05 ± 99.78 (mL) when the 10% outliers (n = 53) were excluded. CONCLUSION TMTV-Net demonstrates strong performance and generalizability in TMTV segmentation across multi-site external datasets, encompassing various lymphoma subtypes. A negligible reduction of 2% in overall performance during testing on external data highlights robust model generalizability across different centers and cancer types, likely attributable to its training with resampled inputs. Our model is publicly available, allowing easy multi-site evaluation and generalizability analysis on datasets from different institutions.
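Two of the building blocks above, the Dice score and the soft-voting ensemble of fold-wise probability maps, have compact definitions. A sketch over flattened masks (hypothetical names and data layout, not TMTV-Net's code):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as
    flat lists of 0/1 integers."""
    intersection = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks agree perfectly.
    return 2.0 * intersection / total if total else 1.0

def soft_vote(prob_maps, threshold=0.5):
    """Soft voting: average per-voxel probabilities from several models,
    then threshold. Returns (binary_mask, averaged_probabilities)."""
    n = len(prob_maps)
    avg = [sum(vals) / n for vals in zip(*prob_maps)]
    return [1 if p > threshold else 0 for p in avg], avg

# Two hypothetical fold models voting on a 2-voxel volume.
mask, avg = soft_vote([[0.9, 0.2], [0.7, 0.4]])
```

In the paper's cascade, the averaged probabilities (not just the thresholded mask) are what feed the second-stage 3D U-Net.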
Affiliation(s)
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10Th Avenue, Vancouver, BC, V5Z 1L3, Canada.
- Ivan S Klyuzhin
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10Th Avenue, Vancouver, BC, V5Z 1L3, Canada
- Joo Hyun O
- College of Medicine, Seoul St. Mary's Hospital, The Catholic University of Korea, Seoul, Republic of Korea
- Xin Tie
- Department of Radiology, University of WI-Madison, Madison, WI, USA
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Muheon Shin
- Department of Radiology, University of WI-Madison, Madison, WI, USA
- Changhee Lee
- Department of Radiology, University of WI-Madison, Madison, WI, USA
- Steve Y Cho
- Department of Radiology, University of WI-Madison, Madison, WI, USA
- Tyler J Bradshaw
- Department of Radiology, University of WI-Madison, Madison, WI, USA
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- University Research and Innovation Center, Óbuda University, Budapest, Hungary
- François Bénard
- BC Cancer, Vancouver, BC, Canada
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Laurie H Sehn
- BC Cancer, Vancouver, BC, Canada
- Centre for Lymphoid Cancer, BC Cancer, Vancouver, Canada
- Kerry J Savage
- BC Cancer, Vancouver, BC, Canada
- Centre for Lymphoid Cancer, BC Cancer, Vancouver, Canada
- Christian Steidl
- BC Cancer, Vancouver, BC, Canada
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, Canada
- Carlos F Uribe
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10Th Avenue, Vancouver, BC, V5Z 1L3, Canada
- BC Cancer, Vancouver, BC, Canada
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10Th Avenue, Vancouver, BC, V5Z 1L3, Canada
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Departments of Physics and Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
6
Zhao Q, Chang CW, Yang X, Zhao L. Robust explanation supervision for false positive reduction in pulmonary nodule detection. Med Phys 2024; 51:1687-1701. [PMID: 38224306] [PMCID: PMC10939846] [DOI: 10.1002/mp.16937]
Abstract
BACKGROUND Lung cancer is the deadliest and second most common cancer in the United States, in part because the lack of early symptoms delays diagnosis. Pulmonary nodules are small abnormal regions that can be potentially correlated with the occurrence of lung cancer. Early detection of these nodules is critical because it can significantly improve patients' survival rates. Thoracic thin-sliced computed tomography (CT) scanning has emerged as a widely used method for the diagnosis and prognosis of lung abnormalities. PURPOSE The standard clinical workflow of detecting pulmonary nodules relies on radiologists analyzing CT images to assess the risk factors of cancerous nodules. However, this approach can be error-prone due to the various causes of nodule formation, such as pollutants and infections. Deep learning (DL) algorithms have recently demonstrated remarkable success in medical image classification and segmentation. As DL becomes an ever more important assistant to radiologists in nodule detection, it is imperative to ensure that the DL algorithm and the radiologist can understand each other's decisions. This study aims to develop a framework integrating explainable AI methods to achieve accurate pulmonary nodule detection. METHODS A robust and explainable detection (RXD) framework is proposed, focusing on reducing false positives in pulmonary nodule detection. Its implementation is based on an explanation supervision method, which uses radiologists' nodule contours as supervision signals to force the model to learn nodule morphologies, enabling improved learning on small datasets. In addition, two imputation methods are applied to the nodule region annotations to reduce the noise within human annotations and allow the model to produce robust attributions that meet human expectations. Sets of 480, 265, and 265 CT images from the public Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset are used for training, validation, and testing, respectively. RESULTS Using only 10, 30, 50, and 100 training samples sequentially, our method consistently improves the classification performance and explanation quality over the baseline in terms of Area Under the Curve (AUC) and Intersection over Union (IoU). In particular, our framework with a learnable imputation kernel improves IoU over the baseline by 24.0% to 80.0%. A pre-defined Gaussian imputation kernel achieves an even greater improvement, 38.4% to 118.8% over the baseline. Compared to the baseline trained on 100 samples, our method shows a smaller drop in AUC when trained on fewer samples. A comprehensive comparison of interpretability shows that our method aligns better with expert opinions. CONCLUSIONS A pulmonary nodule detection framework was demonstrated using public thoracic CT image datasets. The framework integrates the robust explanation supervision (RES) technique to ensure the performance of nodule classification and morphology learning. The method can reduce the workload of radiologists and enable them to focus on the diagnosis and prognosis of potentially cancerous pulmonary nodules at an early stage to improve outcomes for lung cancer patients.
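The IoU used above to score how well a model's attribution map matches a radiologist's contour has a compact definition over binary masks. A minimal, illustrative sketch (flattened masks; not the authors' evaluation code):

```python
def iou(mask_a, mask_b):
    """Intersection over Union between two binary masks given as flat
    lists of 0/1 integers."""
    intersection = sum(a & b for a, b in zip(mask_a, mask_b))
    union = sum(a | b for a, b in zip(mask_a, mask_b))
    # Convention: two empty masks agree perfectly.
    return intersection / union if union else 1.0

# Hypothetical attribution mask vs. expert contour mask.
overlap = iou([1, 1, 0], [1, 0, 0])
```

In an explanation-supervision setting, a higher IoU between attributions and expert contours indicates the model is attending to the nodule itself rather than to spurious context.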
Affiliation(s)
- Qilong Zhao
- Department of Computer Science, Emory University, Atlanta, GA 30308
- Chih-Wei Chang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308
- Liang Zhao
- Department of Computer Science, Emory University, Atlanta, GA 30308
7
Grudza M, Salinel B, Zeien S, Murphy M, Adkins J, Jensen CT, Bay C, Kodibagkar V, Koo P, Dragovich T, Choti MA, Kundranda M, Syeda-Mahmood T, Wang HZ, Chang J. Methods for improving colorectal cancer annotation efficiency for artificial intelligence-observer training. World J Radiol 2023; 15:359-369. [PMID: 38179201] [PMCID: PMC10762523] [DOI: 10.4329/wjr.v15.i12.359]
Abstract
BACKGROUND Missing occult cancer lesions accounts for most diagnostic errors in retrospective radiology reviews, as early cancer can be small or subtle, making the lesions difficult to detect. Second-observer review is the most effective technique for reducing these events and can be economically implemented with the advent of artificial intelligence (AI). AIM To compare two methods for decreasing the annotation time needed to establish the ground truth required for AI model training: skip-slice annotation and AI-initiated annotation. METHODS We developed a 2D U-Net as an AI second observer for detecting colorectal cancer (CRC), along with an ensemble of five differently initialized 2D U-Nets for an ensemble technique. Each model was trained with 51 cases of annotated CRC computed tomography of the abdomen and pelvis, tested with 7 cases, and validated with 20 cases from The Cancer Imaging Archive. The sensitivity, false positives per case, and estimated Dice coefficient were obtained for each method of training. We compared the two methods of annotation and the time reduction associated with each technique. The time differences were tested using Friedman's two-way analysis of variance. RESULTS Sparse annotation significantly reduces annotation time, particularly when skipping 2 slices at a time (P < 0.001). Reducing the annotation by up to two-thirds does not reduce AI model sensitivity or false positives per case. Although initializing human annotation with AI reduces the annotation time, the reduction is minimal, even when using an ensemble AI to decrease false positives. CONCLUSION Our data support sparse annotation as an efficient technique for reducing the time needed to establish the ground truth.
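One simple way to realize the skip-slice idea, propagating labels from annotated slices to the skipped ones by nearest-slice copying, can be sketched as follows. This is an assumption about the mechanics for illustration only; the paper may complete skipped slices differently (e.g., by interpolation):

```python
def fill_skipped_slices(annotations):
    """Fill unannotated slices (None) in a per-slice annotation list by
    copying the nearest annotated slice's label."""
    annotated = [i for i, a in enumerate(annotations) if a is not None]
    if not annotated:
        raise ValueError("no annotated slices to propagate from")
    filled = []
    for i, a in enumerate(annotations):
        if a is not None:
            filled.append(a)
        else:
            # Nearest annotated slice index (ties resolve to the earlier one).
            nearest = min(annotated, key=lambda j: abs(j - i))
            filled.append(annotations[nearest])
    return filled

# Hypothetical volume: only slices 0 and 3 were hand-annotated.
dense = fill_skipped_slices(['A', None, None, 'B'])
```

Under this scheme the annotator labels only every second or third slice, which is where the reported two-thirds time saving comes from.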
Affiliation(s)
- Matthew Grudza
- School of Biological Health and Systems Engineering, Arizona State University, Tempe, AZ 85287, United States
- Brandon Salinel
- Department of Radiology, Banner MD Anderson Cancer Center, Gilbert, AZ 85234, United States
- Sarah Zeien
- School of Osteopathic Medicine, A.T. Still University, Mesa, AZ 85206, United States
- Matthew Murphy
- School of Osteopathic Medicine, A.T. Still University, Mesa, AZ 85206, United States
- Jake Adkins
- Department of Abdominal Imaging, MD Anderson Cancer Center, Houston, TX 77030, United States
- Corey T Jensen
- Department of Abdominal Imaging, University Texas MD Anderson Cancer Center, Houston, TX 77030, United States
- Curtis Bay
- Department of Interdisciplinary Sciences, A.T. Still University, Mesa, AZ 85206, United States
- Vikram Kodibagkar
- School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ 85287, United States
- Phillip Koo
- Department of Radiology, Banner MD Anderson Cancer Center, Gilbert, AZ 85234, United States
- Tomislav Dragovich
- Division of Cancer Medicine, Banner MD Anderson Cancer Center, Gilbert, AZ 85234, United States
- Michael A Choti
- Department of Surgical Oncology, Banner MD Anderson Cancer Center, Gilbert, AZ 85234, United States
- Madappa Kundranda
- Division of Cancer Medicine, Banner MD Anderson Cancer Center, Gilbert, AZ 85234, United States
- Hong-Zhi Wang
- IBM Almaden Research Center, IBM, San Jose, CA 95120, United States
- John Chang
- Department of Radiology, Banner MD Anderson Cancer Center, Gilbert, AZ 85234, United States
8
Ciceri T, Squarcina L, Giubergia A, Bertoldo A, Brambilla P, Peruzzo D. Review on deep learning fetal brain segmentation from Magnetic Resonance images. Artif Intell Med 2023; 143:102608. [PMID: 37673558] [DOI: 10.1016/j.artmed.2023.102608]
Abstract
Brain segmentation is often the first and most critical step in quantitative analysis of the brain for many clinical applications, including fetal imaging. Different aspects challenge the segmentation of the fetal brain in magnetic resonance imaging (MRI), such as the non-standard position of the fetus owing to his/her movements during the examination, rapid brain development, and the limited availability of imaging data. In recent years, several segmentation methods have been proposed for automatically partitioning the fetal brain from MR images. These algorithms aim to define regions of interest with different shapes and intensities, encompassing the entire brain, or isolating specific structures. Deep learning techniques, particularly convolutional neural networks (CNNs), have become a state-of-the-art approach in the field because they can provide reliable segmentation results over heterogeneous datasets. Here, we review the deep learning algorithms developed in the field of fetal brain segmentation and categorize them according to their target structures. Finally, we discuss the perceived research gaps in the literature of the fetal domain, suggesting possible future research directions that could impact the management of fetal MR images.
Affiliation(s)
- Tommaso Ciceri
- NeuroImaging Laboratory, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy; Department of Information Engineering, University of Padua, Padua, Italy
| | - Letizia Squarcina
- Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
| | - Alice Giubergia
- NeuroImaging Laboratory, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy; Department of Information Engineering, University of Padua, Padua, Italy
| | - Alessandra Bertoldo
- Department of Information Engineering, University of Padua, Padua, Italy; University of Padua, Padova Neuroscience Center, Padua, Italy
| | - Paolo Brambilla
- Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy; Department of Neurosciences and Mental Health, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy.
| | - Denis Peruzzo
- NeuroImaging Laboratory, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy
| |
Collapse
|
9
|
Thanoon MA, Zulkifley MA, Mohd Zainuri MAA, Abdani SR. A Review of Deep Learning Techniques for Lung Cancer Screening and Diagnosis Based on CT Images. Diagnostics (Basel) 2023; 13:2617. [PMID: 37627876 PMCID: PMC10453592 DOI: 10.3390/diagnostics13162617] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2023] [Revised: 07/26/2023] [Accepted: 08/02/2023] [Indexed: 08/27/2023] Open
Abstract
Lung cancer is one of the most common and deadly diseases in the world, and early identification is essential to increasing a patient's probability of survival. A frequently used modality for the screening and diagnosis of lung cancer is computed tomography (CT) imaging, which provides a detailed scan of the lung. In line with the advancement of computer-assisted systems, deep learning techniques have been extensively explored to help interpret CT images for lung cancer identification. Hence, the goal of this review is to provide a detailed account of the deep learning techniques developed for screening and diagnosing lung cancer. The review covers an overview of deep learning (DL) techniques, the DL techniques suggested for lung cancer applications, and the novelties of the reviewed methods. It focuses on the two main deep learning methodologies for screening and diagnosing lung cancer: classification and segmentation. The advantages and shortcomings of current deep learning models are also discussed. The resulting analysis demonstrates significant potential for deep learning methods to provide precise and effective computer-assisted lung cancer screening and diagnosis using CT scans. The review closes with a list of potential future works for improving the application of deep learning, to spearhead the advancement of computer-assisted lung cancer diagnosis systems.
Affiliation(s)
- Mohammad A. Thanoon
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, University Kebangsaan Malaysia, Bangi 43600, Malaysia; System and Control Engineering Department, College of Electronics Engineering, Ninevah University, Mosul 41002, Iraq
- Mohd Asyraf Zulkifley
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, University Kebangsaan Malaysia, Bangi 43600, Malaysia
- Muhammad Ammirrul Atiqi Mohd Zainuri
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, University Kebangsaan Malaysia, Bangi 43600, Malaysia
- Siti Raihanah Abdani
- School of Computing Sciences, College of Computing, Informatics and Media, Universiti Teknologi MARA, Shah Alam 40450, Malaysia

10
Neural network model based on global and local features for multi-view mammogram classification. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.03.028] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/19/2023]
11
Maynord M, Farhangi MM, Fermüller C, Aloimonos Y, Levine G, Petrick N, Sahiner B, Pezeshk A. Semi-supervised training using cooperative labeling of weakly annotated data for nodule detection in chest CT. Med Phys 2023. [PMID: 36630691 DOI: 10.1002/mp.16219] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Revised: 12/14/2022] [Accepted: 12/23/2022] [Indexed: 01/13/2023] Open
Abstract
PURPOSE Machine learning algorithms are best trained with large quantities of accurately annotated samples. While natural scene images can often be labeled relatively cheaply and at large scale, obtaining accurate annotations for medical images is both time consuming and expensive. In this study, we propose a cooperative labeling method that allows us to make use of weakly annotated medical imaging data for the training of a machine learning algorithm. As most clinically produced data are weakly-annotated - produced for use by humans rather than machines and lacking information machine learning depends upon - this approach allows us to incorporate a wider range of clinical data and thereby increase the training set size. METHODS Our pseudo-labeling method consists of multiple stages. In the first stage, a previously established network is trained using a limited number of samples with high-quality expert-produced annotations. This network is used to generate annotations for a separate larger dataset that contains only weakly annotated scans. In the second stage, by cross-checking the two types of annotations against each other, we obtain higher-fidelity annotations. In the third stage, we extract training data from the weakly annotated scans, and combine it with the fully annotated data, producing a larger training dataset. We use this larger dataset to develop a computer-aided detection (CADe) system for nodule detection in chest CT. RESULTS We evaluated the proposed approach by presenting the network with different numbers of expert-annotated scans in training and then testing the CADe using an independent expert-annotated dataset. We demonstrate that when availability of expert annotations is severely limited, the inclusion of weakly-labeled data leads to a 5% improvement in the competitive performance metric (CPM), defined as the average of sensitivities at different false-positive rates. 
CONCLUSIONS Our proposed approach can effectively merge a weakly-annotated dataset with a small, well-annotated dataset for algorithm training. This approach can help enlarge limited training data by leveraging the large amount of weakly labeled data typically generated in clinical image interpretation.
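The competitive performance metric (CPM) used above can be sketched numerically. The sketch below is illustrative: it assumes the seven operating points of 1/8 to 8 FPs per scan common in nodule-detection evaluation (e.g., the LUNA16 challenge); the abstract does not state the paper's exact points.

```python
import numpy as np

def cpm(fp_per_scan, sensitivities, points=(0.125, 0.25, 0.5, 1, 2, 4, 8)):
    """Competitive performance metric: the average of sensitivities read off
    the FROC curve at fixed false-positive rates per scan.  Sensitivity at
    each operating point is linearly interpolated between measured values."""
    fp = np.asarray(fp_per_scan, dtype=float)
    sens = np.asarray(sensitivities, dtype=float)
    order = np.argsort(fp)  # np.interp requires increasing x values
    return float(np.mean(np.interp(points, fp[order], sens[order])))
```

When measurements fall exactly on the chosen operating points, `cpm()` reduces to their plain average, which is the quantity the 5% improvement above refers to.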
Affiliation(s)
- Michael Maynord
- University of Maryland, Computer Science Department, Iribe Center for Computer Science and Engineering, College Park, Maryland, USA; Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, Maryland, USA
- M Mehdi Farhangi
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, Maryland, USA
- Cornelia Fermüller
- University of Maryland, Institute for Advanced Computer Studies, Iribe Center for Computer Science and Engineering, College Park, Maryland, USA
- Yiannis Aloimonos
- University of Maryland, Computer Science Department, Iribe Center for Computer Science and Engineering, College Park, Maryland, USA
- Gary Levine
- Division of Radiological Imaging Devices and Electronic Products, CDRH, FDA, Silver Spring, Maryland, USA
- Nicholas Petrick
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, Maryland, USA
- Berkman Sahiner
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, Maryland, USA
- Aria Pezeshk
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, Maryland, USA

12
Mridha MF, Prodeep AR, Hoque ASMM, Islam MR, Lima AA, Kabir MM, Hamid MA, Watanobe Y. A Comprehensive Survey on the Progress, Process, and Challenges of Lung Cancer Detection and Classification. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:5905230. [PMID: 36569180 PMCID: PMC9788902 DOI: 10.1155/2022/5905230] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Revised: 10/17/2022] [Accepted: 11/09/2022] [Indexed: 12/23/2022]
Abstract
Lung cancer is the leading cause of cancer death worldwide, and its mortality rate continues to rise. Early detection improves the chances of recovery. However, because radiologists are few in number and often overworked, the growth in image data makes it hard for them to evaluate images accurately. As a result, many researchers have proposed automated ways to predict the growth of cancer cells from medical imaging quickly and accurately. Much prior work addressed computer-aided detection (CADe) and computer-aided diagnosis (CADx) in computed tomography (CT) scans, magnetic resonance imaging (MRI), and X-ray, aiming at effective detection and segmentation of pulmonary nodules and at classifying nodules as malignant or benign. Still, no comprehensive review covering all aspects of lung cancer detection has been published. In this paper, every aspect of lung cancer detection is discussed in detail, including datasets, image preprocessing, segmentation methods, optimal feature extraction and selection methods, evaluation metrics, and classifiers. Finally, the study examines several lung cancer-related issues and possible solutions.
Affiliation(s)
- M. F. Mridha
- Department of Computer Science and Engineering, American International University Bangladesh, Dhaka 1229, Bangladesh
- Akibur Rahman Prodeep
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- A. S. M. Morshedul Hoque
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Rashedul Islam
- Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
- Aklima Akter Lima
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Muhammad Mohsin Kabir
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Abdul Hamid
- Department of Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Yutaka Watanobe
- Department of Computer Science and Engineering, University of Aizu, Aizuwakamatsu 965-8580, Japan

13
Wang H, Tang N, Zhang C, Hao Y, Meng X, Li J. Practice toward standardized performance testing of computer-aided detection algorithms for pulmonary nodule. Front Public Health 2022; 10:1071673. [PMID: 36568775 PMCID: PMC9768365 DOI: 10.3389/fpubh.2022.1071673] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2022] [Accepted: 11/21/2022] [Indexed: 12/12/2022] Open
Abstract
This study aimed to put into practice a standardized protocol for testing the performance of computer-aided detection (CAD) algorithms for pulmonary nodules. A test dataset was established according to a standardized procedure covering data collection, curation, and annotation. Six types of pulmonary nodules were manually annotated as the reference standard. Three rules for matching algorithm output against the reference standard were applied and compared: (1) "center hit" (whether the center of the algorithm-highlighted region of interest (ROI) fell inside the reference-standard ROI); (2) "center distance" (whether the distance between the algorithm-highlighted ROI center and the reference-standard center was below a certain threshold); (3) "area overlap" (whether the overlap between the algorithm-highlighted ROI and the reference standard was above a certain threshold). Performance metrics were calculated and compared across ten algorithms under test (AUTs). The test set consisted of CT sequences from 593 patients. Under the "center hit" rule, the average recall rate, average precision, and average F1 score of the ten AUTs were 54.68, 38.19, and 42.39%, respectively. The corresponding results under the "center distance" rule were 55.43, 38.69, and 42.96%, and under the "area overlap" rule 40.35, 27.75, and 31.13%. Among the six types of pulmonary nodules, the AUTs showed the highest miss rate for pure ground-glass nodules, averaging 59.32%, followed by pleural nodules and solid nodules, averaging 49.80 and 42.21%, respectively. The testing results changed with the matching rule adopted, and the AUTs showed uneven performance across nodule types. This centralized testing protocol supports comparison between algorithms with similar intended use and helps evaluate algorithm performance.
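The three matching rules lend themselves to a compact sketch. Everything below is illustrative, not the study's code: ROIs are assumed to be axis-aligned boxes (x1, y1, x2, y2), and "area overlap" is taken as intersection-over-union, which may differ from the study's exact overlap definition.

```python
def center(box):
    """Center point of an axis-aligned box (x1, y1, x2, y2)."""
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def center_hit(det, ref):
    """Rule 1: the detected ROI's center falls inside the reference ROI."""
    cx, cy = center(det)
    return ref[0] <= cx <= ref[2] and ref[1] <= cy <= ref[3]

def center_distance(det, ref, threshold):
    """Rule 2: the distance between ROI centers is below a threshold."""
    (dx, dy), (rx, ry) = center(det), center(ref)
    return ((dx - rx) ** 2 + (dy - ry) ** 2) ** 0.5 < threshold

def area_overlap(det, ref, threshold):
    """Rule 3: overlap (here intersection-over-union) exceeds a threshold."""
    ix = max(0.0, min(det[2], ref[2]) - max(det[0], ref[0]))
    iy = max(0.0, min(det[3], ref[3]) - max(det[1], ref[1]))
    inter = ix * iy
    a = (det[2] - det[0]) * (det[3] - det[1])
    b = (ref[2] - ref[0]) * (ref[3] - ref[1])
    return inter / (a + b - inter) > threshold
```

Because a detection that passes one rule can fail another (a small box centered on a large lesion hits by center but overlaps little by area), recall and precision shift with the rule chosen, which matches the reported differences between the three result sets.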
Affiliation(s)
- Hao Wang
- Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Na Tang
- School of Bioengineering, Chongqing University, Chongqing, China
- Chao Zhang
- Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Ye Hao
- Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Xiangfeng Meng
- Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China. *Correspondence: Xiangfeng Meng
- Jiage Li
- Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China

14
Lin X, Yan Z, Kuang Z, Zhang H, Deng X, Yu L. Fracture R-CNN: An anchor-efficient anti-interference framework for skull fracture detection in CT images. Med Phys 2022; 49:7179-7192. [PMID: 35713606 DOI: 10.1002/mp.15809] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2021] [Revised: 04/19/2022] [Accepted: 05/16/2022] [Indexed: 12/13/2022] Open
Abstract
BACKGROUND Skull fracture, a common traumatic brain injury, can lead to multiple complications including bleeding, leaking of cerebrospinal fluid, infection, and seizures. Automatic skull fracture detection (SFD) is of great importance, especially in emergency medicine. PURPOSE Existing algorithms for SFD, developed on hand-crafted features, suffer from low detection accuracy due to poor generalizability to unseen samples. Deploying deep detectors designed for natural images, such as the Faster Region-based Convolutional Neural Network (R-CNN), can help, but such models are highly redundant and produce nonnegligible false detections owing to interference from cranial sutures and the skull base. We therefore propose, for the first time, an anchor-efficient anti-interference deep learning framework named Fracture R-CNN for accurate SFD at low computational cost. METHODS The proposed Fracture R-CNN incorporates the prior knowledge used in clinical diagnosis into the original Faster R-CNN. Specifically, based on the distributions of skull fractures, we first propose an adaptive anchoring region proposal network (AA-RPN) to generate proposals for diverse-scale fractures with low computational complexity. Then, based on the prior knowledge that cranial sutures exist at the junctions of bones and usually contain sclerotic margins, we design an anti-interference head (A-Head) network to eliminate cranial suture interference for better detection. In addition, to further enhance the anti-interference ability of the proposed A-Head, a difficulty-balanced weighted loss function is proposed to emphasize distinguishing the interference areas of the skull base and cranial sutures during training. RESULTS Experimental results demonstrate that the proposed Fracture R-CNN outperforms current state-of-the-art (SOTA) deep detectors for SFD, with higher recall and fewer false detections.
Compared to Faster R-CNN, the proposed Fracture R-CNN improves the average precision (AP) by 11.74% and the free-response receiver operating characteristic (FROC) score by 11.08%. Through validating on various backbones, we further demonstrate the architecture independence of Fracture R-CNN, making it extendable to other detection applications. CONCLUSIONS As the customized deep learning-based framework for SFD, Fracture R-CNN can effectively overcome the unique challenges in SFD with less computational cost, leading to a better detection performance compared to the SOTA deep detectors. Moreover, we believe the prior knowledge explored for Fracture R-CNN would shed new light on future deep learning approaches for SFD.
Affiliation(s)
- Xian Lin
- School of Electronic Information and Communication, Huazhong University of Science and Technology, Wuhan, China
- Zengqiang Yan
- School of Electronic Information and Communication, Huazhong University of Science and Technology, Wuhan, China
- Zhuo Kuang
- School of Electronic Information and Communication, Huazhong University of Science and Technology, Wuhan, China
- Hang Zhang
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Xianbo Deng
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Li Yu
- School of Electronic Information and Communication, Huazhong University of Science and Technology, Wuhan, China

15
You S, Reyes M. Influence of contrast and texture based image modifications on the performance and attention shift of U-Net models for brain tissue segmentation. FRONTIERS IN NEUROIMAGING 2022; 1:1012639. [PMID: 37555149 PMCID: PMC10406260 DOI: 10.3389/fnimg.2022.1012639] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 10/12/2022] [Indexed: 08/10/2023]
Abstract
Contrast and texture modifications applied during training or at test time have recently shown promising results for enhancing the generalization performance of deep learning segmentation methods in medical image analysis. However, this phenomenon has not been investigated in depth. In this study, we examined it in a controlled experimental setting, using datasets from the Human Connectome Project and a large set of simulated MR protocols, in order to mitigate data confounders and investigate possible explanations for why model performance changes under different levels of contrast- and texture-based modification. Our experiments confirm previous findings on the improved performance of models subjected to contrast and texture modifications during training and/or test time, but further show the interplay when these operations are combined, as well as the regimes of model improvement and worsening across scanning parameters. Furthermore, our findings demonstrate a spatial attention shift phenomenon in trained models, occurring at different levels of model performance and varying with the type of applied image modification.
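As a reference for the kind of contrast modification studied, a minimal sketch follows. The rescale-around-the-mean form and the [0.7, 1.3] factor range are assumptions for illustration; the paper's actual modification protocol and parameter ranges are not given here.

```python
import numpy as np

def random_contrast(image, rng, low=0.7, high=1.3):
    """Rescale intensities around the image mean by a random factor,
    then clip back to the original intensity range."""
    factor = rng.uniform(low, high)
    mean = image.mean()
    out = mean + factor * (image - mean)
    return np.clip(out, image.min(), image.max())
```

Applied per sample during training (or as a test-time variant), this perturbs contrast while keeping intensities within the original range; a value equal to the image mean maps to itself.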
Affiliation(s)
- Suhang You
- Medical Image Analysis Group, ARTORG, Graduate School for Cellular and Biomedical Sciences, University of Bern, Bern, Switzerland

16
Analysis of the Causes of Solitary Pulmonary Nodule Misdiagnosed as Lung Cancer by Using Artificial Intelligence: A Retrospective Study at a Single Center. Diagnostics (Basel) 2022; 12:diagnostics12092218. [PMID: 36140618 PMCID: PMC9497679 DOI: 10.3390/diagnostics12092218] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2022] [Revised: 09/07/2022] [Accepted: 09/12/2022] [Indexed: 11/25/2022] Open
Abstract
Artificial intelligence (AI) adopting deep learning technology has been widely used in the medical imaging domain in recent years. It enables automatic judgment of benign and malignant solitary pulmonary nodules (SPNs) and has even replaced the work of doctors to some extent. However, misdiagnoses can occur in certain cases, and only by determining their causes can AI play a larger role. A total of 21 Coronavirus disease 2019 (COVID-19) patients were diagnosed with SPN by CT imaging. Their clinical data, including general condition, imaging features, AI reports, and outcomes, were included in this retrospective study. Although COVID-19 was confirmed by reverse transcription-polymerase chain reaction (RT-PCR) testing for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), their CT imaging data were misjudged by AI as high-risk nodules for lung cancer. Imaging characteristics included burr sign (76.2%), lobulated sign (61.9%), pleural indentation (42.9%), smooth edges (23.8%), and cavity (14.3%). The accuracy of AI differed from that of radiologists in judging the nature of benign SPNs (p < 0.001, κ = 0.036 < 0.4, indicating poor agreement between the two diagnostic methods). COVID-19 patients with SPN might be misdiagnosed by the AI system, suggesting that the system needs further optimization, especially in the event of a new disease outbreak.
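The reported agreement statistic (κ = 0.036) is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal sketch for two raters' categorical labels (illustrative, not the study's code):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' labels given as equal-length lists.
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is chance agreement from each rater's marginal label frequencies."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n
    cats = set(a) | set(b)
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    if pe == 1:  # both raters used a single identical label; define as 1.0
        return 1.0
    return (po - pe) / (1 - pe)
```

A kappa near zero, as here, means the two raters agree hardly more often than chance would predict, which is why the study concludes the AI and radiologist judgments fit poorly.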
17
LMA-Net: A lesion morphology aware network for medical image segmentation towards breast tumors. Comput Biol Med 2022; 147:105685. [DOI: 10.1016/j.compbiomed.2022.105685] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2022] [Revised: 04/20/2022] [Accepted: 05/30/2022] [Indexed: 11/17/2022]
18
Min Y, Hu L, Wei L, Nie S. Computer-aided detection of pulmonary nodules based on convolutional neural networks: a review. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac568e] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2021] [Accepted: 02/18/2022] [Indexed: 02/08/2023]
Abstract
Computer-aided detection (CADe) technology has been proven to increase the detection rate of pulmonary nodules, which has important clinical significance for the early diagnosis of lung cancer. In this study, we systematically review the latest techniques in pulmonary nodule CADe based on deep learning models with convolutional neural networks in computed tomography images. First, brief descriptions and popular architectures of convolutional neural networks are introduced. Second, several common public databases and evaluation metrics are briefly described. Third, state-of-the-art approaches with excellent performance are selected. Subsequently, we combine the clinical diagnostic process and the traditional four steps of pulmonary nodule CADe into two stages, namely data preprocessing and image analysis. Further, the major optimizations of deep learning models and algorithms are highlighted according to the progressive evaluation effect of each method, and some clinical evidence is added. Finally, the various methods are summarized and compared; the innovative or valuable contributions of each are expected to guide future research directions. The analyzed results show that deep learning-based methods have significantly transformed the detection of pulmonary nodules and that the design of these methods can be inspired by clinical imaging diagnostic procedures. Moreover, focusing on the image analysis stage yields better returns; in particular, optimal results can be achieved by optimizing the candidate nodule generation and false-positive reduction steps. End-to-end methods, with greater operating speed and lower computational consumption, are superior to other methods for CADe of pulmonary nodules.
19
Kocher MR, Chamberlin J, Waltz J, Snoddy M, Stringer N, Stephenson J, Kahn J, Mercer M, Baruah D, Aquino G, Kabakus I, Hoelzer P, Sahbaee P, Schoepf UJ, Burt JR. Tumor burden of lung metastases at initial staging in breast cancer patients detected by artificial intelligence as a prognostic tool for precision medicine. Heliyon 2022; 8:e08962. [PMID: 35243082 PMCID: PMC8873537 DOI: 10.1016/j.heliyon.2022.e08962] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Revised: 12/20/2021] [Accepted: 02/11/2022] [Indexed: 12/05/2022] Open
Abstract
Background Determination of the total number and size of all pulmonary metastases on chest CT is time-consuming and as such has been understudied as an independent metric for disease assessment. A novel artificial intelligence (AI) model may allow automated detection, size determination, and quantification of the number of pulmonary metastases on chest CT. Objective To investigate the utility of a novel AI program applied to initial staging chest CT in breast cancer patients for risk assessment of mortality and survival. Methods Retrospective imaging data from a cohort of 226 subjects with breast cancer were assessed by the novel AI program and the results validated by blinded readers. Mean clinical follow-up was 2.5 years for outcomes including cancer-related death and development of extrapulmonary metastatic disease. AI measurements, including total number of pulmonary metastases and maximum nodule size, were assessed by Cox proportional hazard modeling and adjusted survival. Results 752 lung nodules were identified by the AI program, 689 of them in 168 subjects with confirmed lung metastases (Lmet+) and 63 in 58 subjects without confirmed lung metastases (Lmet-). Compared to the reader assessment, AI had a per-patient sensitivity, specificity, PPV, and NPV of 0.952, 0.639, 0.878, and 0.830, respectively. Mortality in the Lmet+ group was four times greater than in the Lmet- group (p = 0.002). In a multivariate analysis, total lung nodule count by AI correlated strongly with overall mortality (OR 1.11 (range 1.07–1.15), p < 0.001), with an AUC of 0.811 (R2 = 0.226, p < 0.0001). When total lung nodule count and maximum nodule diameter were combined, the AUC was 0.826 (R2 = 0.243, p < 0.001).
Conclusion Automated AI-based detection of lung metastases in breast cancer patients at initial staging chest CT performed well at identifying pulmonary metastases and demonstrated strong correlation between the total number and maximum size of lung metastases with future mortality. Clinical impact As a component of precision medicine, AI-based measurements at the time of initial staging may improve prediction of which breast cancer patients will have negative future outcomes.
- Automated detection software can quantify lung metastases on initial staging chest CT in breast cancer patients.
- AI-detected lung metastasis number and maximum diameter on CT at initial cancer staging were strong predictors of mortality.
- The AI detection and segmentation tool contributes to accurate individualized prognostication in breast cancer patients.
Affiliation(s)
- Madison R Kocher
- Medical University of South Carolina, Department of Radiology, 96 Jonathan Lucas Street Suite 210, MSC 323 Charleston, SC 29425, USA
- Jordan Chamberlin
- Medical University of South Carolina, Department of Radiology, 96 Jonathan Lucas Street Suite 210, MSC 323 Charleston, SC 29425, USA
- Jeffrey Waltz
- Medical University of South Carolina, Department of Radiology, 96 Jonathan Lucas Street Suite 210, MSC 323 Charleston, SC 29425, USA
- Madalyn Snoddy
- Medical University of South Carolina, Department of Radiology, 96 Jonathan Lucas Street Suite 210, MSC 323 Charleston, SC 29425, USA
- Natalie Stringer
- Medical University of South Carolina, Department of Radiology, 96 Jonathan Lucas Street Suite 210, MSC 323 Charleston, SC 29425, USA
- Joseph Stephenson
- Medical University of South Carolina, Department of Radiology, 96 Jonathan Lucas Street Suite 210, MSC 323 Charleston, SC 29425, USA
- Jacob Kahn
- Medical University of South Carolina, Department of Radiology, 96 Jonathan Lucas Street Suite 210, MSC 323 Charleston, SC 29425, USA
- Megan Mercer
- Medical University of South Carolina, Department of Radiology, 96 Jonathan Lucas Street Suite 210, MSC 323 Charleston, SC 29425, USA
- Dhiraj Baruah
- Medical University of South Carolina, Department of Radiology, 96 Jonathan Lucas Street Suite 210, MSC 323 Charleston, SC 29425, USA
- Gilberto Aquino
- Medical University of South Carolina, Department of Radiology, 96 Jonathan Lucas Street Suite 210, MSC 323 Charleston, SC 29425, USA
- Ismail Kabakus
- Medical University of South Carolina, Department of Radiology, 96 Jonathan Lucas Street Suite 210, MSC 323 Charleston, SC 29425, USA
- U Joseph Schoepf
- Medical University of South Carolina, Department of Radiology, 96 Jonathan Lucas Street Suite 210, MSC 323 Charleston, SC 29425, USA
- Jeremy R Burt
- Medical University of South Carolina, Department of Radiology, 96 Jonathan Lucas Street Suite 210, MSC 323 Charleston, SC 29425, USA

20
Terasaki Y, Yokota H, Tashiro K, Maejima T, Takeuchi T, Kurosawa R, Yamauchi S, Takada A, Mukai H, Ohira K, Ota J, Horikoshi T, Mori Y, Uno T, Suyari H. Multidimensional Deep Learning Reduces False-Positives in the Automated Detection of Cerebral Aneurysms on Time-Of-Flight Magnetic Resonance Angiography: A Multi-Center Study. Front Neurol 2022; 12:742126. [PMID: 35115991 PMCID: PMC8805516 DOI: 10.3389/fneur.2021.742126] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Accepted: 12/07/2021] [Indexed: 11/16/2022] Open
Abstract
Current deep learning-based cerebral aneurysm detection demonstrates high sensitivity but produces numerous false-positives (FPs), which hampers clinical application of automated detection systems for time-of-flight magnetic resonance angiography. To reduce FPs while maintaining high sensitivity, we developed a multidimensional convolutional neural network (MD-CNN) designed to unite planar and stereoscopic information about aneurysms. This retrospective study enrolled time-of-flight magnetic resonance angiography images of cerebral aneurysms from three institutions from June 2006 to April 2019. In the internal test, 80% of the entire data set was used for model training and 20% for testing, while for the external tests, data from different pairs of the three institutions were used for training and the remaining institution for testing. Images containing aneurysms > 15 mm and images without aneurysms were excluded. Three deep learning models [planar information only (2D-CNN), stereoscopic information only (3D-CNN), and multidimensional information (MD-CNN)] were trained to classify whether voxels contained aneurysms, and each was evaluated on each test. The performance of each model was assessed using free-response receiver operating characteristic curves. In total, 732 aneurysms (5.9 ± 2.5 mm) from 559 cases (327, 120, and 112 from institutes A, B, and C; 469 and 263 at 1.5T and 3.0T MRI) were included in this study. In the internal test, the highest sensitivities were 80.4, 87.4, and 82.5%, with 6.1, 7.1, and 5.0 FPs/case at a fixed sensitivity of 80% for the 2D-CNN, 3D-CNN, and MD-CNN, respectively. In the external test, the highest sensitivities were 82.1, 86.5, and 89.1%, with 5.9, 7.4, and 4.2 FPs/case, respectively. The MD-CNN is a new approach that maintains sensitivity while simultaneously reducing FPs.
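The free-response evaluation above reduces, at each operating point, to counting matched ground-truth aneurysms and unmatched detections across cases. A minimal sketch, where the matching criterion `match` is a placeholder assumption (the study's actual criterion is not stated here):

```python
def froc_point(case_detections, case_truths, match):
    """One FROC operating point: (sensitivity, FPs per case).
    case_detections: per-case lists of detections; case_truths: per-case
    lists of ground truths; match(det, truth) -> bool decides a hit."""
    tp = fp = total = 0
    for dets, truths in zip(case_detections, case_truths):
        total += len(truths)
        hit = [False] * len(truths)
        for d in dets:
            matched = False
            for i, t in enumerate(truths):
                if not hit[i] and match(d, t):
                    hit[i] = matched = True  # each truth matched at most once
                    break
            if not matched:
                fp += 1  # unmatched detection counts as a false positive
        tp += sum(hit)
    sensitivity = tp / total if total else 0.0
    return sensitivity, fp / len(case_detections)
```

Sweeping the detector's confidence threshold and recomputing this pair traces the free-response curve from which figures like "87.4% sensitivity at 7.1 FPs/case" are read.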
Affiliation(s)
- Yuki Terasaki: Graduate School of Science and Engineering, Chiba University, Chiba, Japan; Department of EC Platform, ZOZO Technologies, Inc., Tokyo, Japan
- Hajime Yokota (correspondence): Department of Diagnostic Radiology and Radiation Oncology, Graduate School of Medicine, Chiba University, Chiba, Japan
- Kohei Tashiro: Graduate School of Science and Engineering, Chiba University, Chiba, Japan
- Takuma Maejima: Department of Radiology, Chiba University Hospital, Chiba, Japan
- Takashi Takeuchi: Department of Radiology, Chiba University Hospital, Chiba, Japan
- Ryuna Kurosawa: Department of Radiology, Chiba University Hospital, Chiba, Japan
- Shoma Yamauchi: Department of Radiology, Chiba University Hospital, Chiba, Japan
- Akiyo Takada: Department of Radiology, Chiba University Hospital, Chiba, Japan
- Hiroki Mukai: Department of Radiology, Chiba University Hospital, Chiba, Japan
- Kenji Ohira: Department of Radiology, Chiba University Hospital, Chiba, Japan
- Joji Ota: Department of Radiology, Chiba University Hospital, Chiba, Japan
- Takuro Horikoshi: Department of Radiology, Chiba University Hospital, Chiba, Japan
- Yasukuni Mori: Graduate School of Engineering, Chiba University, Chiba, Japan
- Takashi Uno: Department of Diagnostic Radiology and Radiation Oncology, Graduate School of Medicine, Chiba University, Chiba, Japan
- Hiroki Suyari: Graduate School of Engineering, Chiba University, Chiba, Japan
21
Peng C, Zhang Y, Zheng J, Li B, Shen J, Li M, Liu L, Qiu B, Chen DZ. IMIIN: An inter-modality information interaction network for 3D multi-modal breast tumor segmentation. Comput Med Imaging Graph 2022; 95:102021. [PMID: 34861622 DOI: 10.1016/j.compmedimag.2021.102021] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2021] [Revised: 11/02/2021] [Accepted: 11/23/2021] [Indexed: 11/22/2022]
Abstract
Breast tumor segmentation is critical to the diagnosis and treatment of breast cancer. In clinical breast cancer analysis, experts often examine multi-modal images, since such images provide abundant complementary information on tumor morphology. Known multi-modal breast tumor segmentation methods extracted 2D tumor features and used information from one modality to assist another. However, these methods were not conducive to fusing multi-modal information efficiently, and may even have fused interfering information, due to the lack of effective information interaction management between different modalities. Moreover, these methods did not consider the effect of small tumor characteristics on the segmentation results. In this paper, we propose a new inter-modality information interaction network to segment breast tumors in 3D multi-modal MRI. Our network employs a hierarchical structure to extract local information of small tumors, which facilitates precise segmentation of tumor boundaries. Under this structure, we present a 3D tiny-object segmentation network based on DenseVoxNet to preserve the boundary details of the segmented tumors (especially small tumors). Further, we introduce a bi-directional request-supply information interaction module between different modalities, so that each modality can request helpful auxiliary information according to its own needs. Experiments on a clinical 3D multi-modal MRI breast tumor dataset show that our new 3D IMIIN is superior to state-of-the-art methods and attains better segmentation results, suggesting that our new method has a good clinical application prospect.
Affiliation(s)
- Chengtao Peng: Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230026, China; Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA
- Yue Zhang: College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
- Jian Zheng: Suzhou Institute of Biomedical Engineering and Technology, CAS, Suzhou 215163, China
- Bin Li: Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230026, China
- Jun Shen: Department of Radiology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou 510120, China
- Ming Li: Suzhou Institute of Biomedical Engineering and Technology, CAS, Suzhou 215163, China
- Lei Liu: Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230026, China
- Bensheng Qiu: Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230026, China
- Danny Z Chen: Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA
22
Naik A, Edla DR, Dharavath R. Prediction of Malignancy in Lung Nodules Using Combination of Deep, Fractal, and Gray-Level Co-Occurrence Matrix Features. BIG DATA 2021; 9:480-498. [PMID: 34191590 DOI: 10.1089/big.2020.0190] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Accurate detection of malignant tumors on lung computed tomography scans is crucial for early diagnosis of lung cancer and hence faster recovery of patients. Several deep learning methodologies have been proposed for lung tumor detection, especially the convolutional neural network (CNN). However, as a CNN may lose some of the spatial relationships between features, we combine texture features such as fractal features and gray-level co-occurrence matrix (GLCM) features with the CNN features to improve the accuracy of tumor detection. Our framework has two advantages. First, it fuses CNN features with hand-crafted features such as fractal and GLCM features to gather spatial information. Second, we reduce the overfitting effect by replacing the softmax layer with a support vector machine classifier. Experiments have shown that texture features such as fractal and GLCM features, when concatenated with deep features extracted from a DenseNet architecture, achieve an accuracy of 95.42%, sensitivity of 97.49%, specificity of 93.97%, and positive predictive value of 95.96%, with an area under the curve of 0.95.
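The fusion described above — hand-crafted texture features concatenated with deep features — can be sketched as follows. The GLCM statistics are computed directly in NumPy; the DenseNet embedding is simulated with random numbers and the SVM stage is omitted, so this illustrates only the feature-fusion step, not the paper's full pipeline:

```python
import numpy as np

def glcm_features(img, levels=8, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel offset, summarized by
    contrast, homogeneity, and energy (three classic Haralick-style features)."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantize
    dr, dc = offset
    glcm = np.zeros((levels, levels))
    rows, cols = q.shape
    for r in range(max(0, -dr), rows - max(0, dr)):
        for c in range(max(0, -dc), cols - max(0, dc)):
            glcm[q[r, c], q[r + dr, c + dc]] += 1
    glcm /= glcm.sum()                       # joint probability of level pairs
    i, j = np.indices((levels, levels))
    contrast = ((i - j) ** 2 * glcm).sum()
    homogeneity = (glcm / (1 + np.abs(i - j))).sum()
    energy = (glcm ** 2).sum()
    return np.array([contrast, homogeneity, energy])

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(32, 32))   # stand-in for a nodule patch
deep_feats = rng.standard_normal(16)          # stand-in for a DenseNet embedding
fused = np.concatenate([deep_feats, glcm_features(patch)])
print(fused.shape)  # (19,)
```

The fused vector would then be fed to the SVM classifier in place of the softmax layer.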
Affiliation(s)
- Amrita Naik: Department of Computer Science and Engineering, National Institute of Technology, Ponda, Goa, India
- Damodar Reddy Edla: Department of Computer Science and Engineering, National Institute of Technology, Ponda, Goa, India
- Ramesh Dharavath: Department of Computer Science and Engineering, Indian Institute of Technology Dhanbad, Dhanbad, Jharkhand, India
23
Guo Z, Zhao L, Yuan J, Yu H. MSANet: Multi-Scale Aggregation Network Integrating Spatial and Channel Information for Lung Nodule Detection. IEEE J Biomed Health Inform 2021; 26:2547-2558. [PMID: 34847048 DOI: 10.1109/jbhi.2021.3131671] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Improving the detection accuracy of pulmonary nodules plays an important role in the diagnosis and early treatment of lung cancer. In this paper, a multiscale aggregation network (MSANet), which integrates spatial and channel information, is proposed for 3D pulmonary nodule detection. MSANet is designed to improve the network's ability to extract information and realize multiscale information fusion. First, multiscale aggregation interaction strategies are used to extract multilevel features and avoid feature fusion interference caused by large resolution differences. These strategies can effectively integrate the contextual information of adjacent resolutions and help to detect nodules of different sizes. Second, the feature extraction module is designed with efficient channel attention and self-calibrated convolutions (ECA-SC) to enhance inter-channel and local spatial information. ECA-SC also recalibrates the features during feature extraction, which enables adaptive learning of feature weights and enhances the information extraction ability of the features. Third, the distribution ranking (DR) loss is introduced as the classification loss function to address the imbalance between positive and negative samples. The proposed MSANet is comprehensively compared with other pulmonary nodule detection networks on the LUNA16 dataset, and a CPM score of 0.920 is obtained. The results show that the sensitivity for detecting pulmonary nodules is improved and that the average number of false positives is effectively reduced. The proposed method has advantages in pulmonary nodule detection and can effectively assist radiologists.
24
Naik A, Edla DR. Lung nodule classification using combination of CNN, second and higher order texture features. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2021. [DOI: 10.3233/jifs-189847] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Lung cancer is the most common cancer worldwide, and identification of malignant tumors at an early stage is needed for diagnosis and treatment of the patient, thus avoiding progression to a later stage. In recent times, deep learning architectures such as CNNs have shown promising results in identifying malignant tumors in CT scans. In this paper, we combine CNN features with texture features such as Haralick and gray-level run-length matrix features to gather the benefits of the high-level and spatial features extracted from lung nodules and improve the accuracy of classification. These features are further classified using an SVM classifier instead of a softmax classifier in order to reduce overfitting. Our model was validated on the LUNA dataset and achieved an accuracy of 93.53%, sensitivity of 86.62%, specificity of 96.55%, and positive predictive value of 94.02%.
Affiliation(s)
- Amrita Naik: Computer Science and Engineering, National Institute of Technology, Ponda, Goa, India
- Damodar Reddy Edla: Computer Science and Engineering, National Institute of Technology, Ponda, Goa, India
25
Hong J, Yun HJ, Park G, Kim S, Ou Y, Vasung L, Rollins CK, Ortinau CM, Takeoka E, Akiyama S, Tarui T, Estroff JA, Grant PE, Lee JM, Im K. Optimal Method for Fetal Brain Age Prediction Using Multiplanar Slices From Structural Magnetic Resonance Imaging. Front Neurosci 2021; 15:714252. [PMID: 34707474 PMCID: PMC8542770 DOI: 10.3389/fnins.2021.714252] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2021] [Accepted: 09/08/2021] [Indexed: 11/23/2022] Open
Abstract
The accurate prediction of fetal brain age using magnetic resonance imaging (MRI) may contribute to the identification of brain abnormalities and the risk of adverse developmental outcomes. This study aimed to propose a method for predicting fetal brain age using MRIs from 220 healthy fetuses between 15.9 and 38.7 weeks of gestational age (GA). We built a 2D single-channel convolutional neural network (CNN) with multiplanar MRI slices in different orthogonal planes without correction for interslice motion. In each fetus, multiple age predictions from different slices were generated, and the brain age was obtained using the mode that determined the most frequent value among the multiple predictions from the 2D single-channel CNN. We obtained a mean absolute error (MAE) of 0.125 weeks (0.875 days) between the GA and brain age across the fetuses. The use of multiplanar slices achieved significantly lower prediction error and its variance than the use of a single slice and a single MRI stack. Our 2D single-channel CNN with multiplanar slices yielded a significantly lower stack-wise MAE (0.304 weeks) than the 2D multi-channel (MAE = 0.979, p < 0.001) and 3D (MAE = 1.114, p < 0.001) CNNs. The saliency maps from our method indicated that the anatomical information describing the cortex and ventricles was the primary contributor to brain age prediction. With the application of the proposed method to external MRIs from 21 healthy fetuses, we obtained an MAE of 0.508 weeks. Based on the external MRIs, we found that the stack-wise MAE of the 2D single-channel CNN (0.743 weeks) was significantly lower than those of the 2D multi-channel (1.466 weeks, p < 0.001) and 3D (1.241 weeks, p < 0.001) CNNs. These results demonstrate that our method with multiplanar slices accurately predicts fetal brain age without the need for increased dimensionality or complex MRI preprocessing steps.
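The mode-based aggregation of per-slice predictions can be sketched as follows. Since CNN outputs are continuous, the 0.5-week binning used here to define a "most frequent value" is an assumption for illustration, not a detail stated in the abstract:

```python
import numpy as np
from collections import Counter

def aggregate_brain_age(slice_predictions, bin_width=0.5):
    """Combine per-slice age predictions (in weeks) by taking the mode of
    predictions discretized into bins of `bin_width` weeks."""
    binned = np.round(np.asarray(slice_predictions) / bin_width) * bin_width
    mode_value, _count = Counter(binned.tolist()).most_common(1)[0]
    return mode_value

# Multiplanar slice outputs for one fetus; the 31.0 outlier is voted down.
preds = [28.0, 28.5, 28.5, 28.5, 29.0, 31.0, 28.5]
print(aggregate_brain_age(preds))  # 28.5
```

The appeal of the mode over the mean is exactly this robustness: a few badly motion-corrupted slices cannot drag the final estimate.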
Affiliation(s)
- Jinwoo Hong: Department of Electronic Engineering, Hanyang University, Seoul, South Korea; Fetal Neonatal Neuroimaging and Developmental Science Center, Boston Children’s Hospital and Harvard Medical School, Boston, MA, United States
- Hyuk Jin Yun: Fetal Neonatal Neuroimaging and Developmental Science Center, Boston Children’s Hospital and Harvard Medical School, Boston, MA, United States; Division of Newborn Medicine, Boston Children’s Hospital and Harvard Medical School, Boston, MA, United States
- Gilsoon Park: USC Mark and Mary Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA, United States
- Seonggyu Kim: Department of Electronic Engineering, Hanyang University, Seoul, South Korea
- Yangming Ou: Fetal Neonatal Neuroimaging and Developmental Science Center, Division of Newborn Medicine, Department of Radiology, and Computational Health Informatics Program, Boston Children’s Hospital and Harvard Medical School, Boston, MA, United States
- Lana Vasung: Fetal Neonatal Neuroimaging and Developmental Science Center and Division of Newborn Medicine, Boston Children’s Hospital and Harvard Medical School, Boston, MA, United States
- Caitlin K. Rollins: Department of Neurology, Boston Children’s Hospital and Harvard Medical School, Boston, MA, United States
- Cynthia M. Ortinau: Department of Pediatrics, Washington University in St. Louis, St. Louis, MO, United States
- Emiko Takeoka: Mother Infant Research Institute, Tufts Medical Center, Boston, MA, United States
- Shizuko Akiyama: Center for Perinatal and Neonatal Medicine, Tohoku University Hospital, Sendai, Japan
- Tomo Tarui: Mother Infant Research Institute, Tufts Medical Center, Boston, MA, United States
- Judy A. Estroff: Department of Radiology, Boston Children’s Hospital and Harvard Medical School, Boston, MA, United States
- Patricia Ellen Grant: Fetal Neonatal Neuroimaging and Developmental Science Center, Division of Newborn Medicine, and Department of Radiology, Boston Children’s Hospital and Harvard Medical School, Boston, MA, United States
- Jong-Min Lee: Department of Biomedical Engineering, Hanyang University, Seoul, South Korea
- Kiho Im: Fetal Neonatal Neuroimaging and Developmental Science Center and Division of Newborn Medicine, Boston Children’s Hospital and Harvard Medical School, Boston, MA, United States
26
Gu Y, Chi J, Liu J, Yang L, Zhang B, Yu D, Zhao Y, Lu X. A survey of computer-aided diagnosis of lung nodules from CT scans using deep learning. Comput Biol Med 2021; 137:104806. [PMID: 34461501 DOI: 10.1016/j.compbiomed.2021.104806] [Citation(s) in RCA: 60] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2021] [Revised: 08/23/2021] [Accepted: 08/23/2021] [Indexed: 12/17/2022]
Abstract
Lung cancer has one of the highest mortalities of all cancers. According to the National Lung Screening Trial, patients who underwent low-dose computed tomography (CT) scanning once a year for 3 years showed a 20% decline in lung cancer mortality. To further improve the survival rate of lung cancer patients, computer-aided diagnosis (CAD) technology shows great potential. In this paper, we summarize existing CAD approaches applying deep learning to CT scan data for pre-processing, lung segmentation, false positive reduction, lung nodule detection, segmentation, classification and retrieval. Selected papers are drawn from academic journals and conferences up to November 2020. We discuss the development of deep learning, describe several important aspects of lung nodule CAD systems and assess the performance of the selected studies on various datasets, which include LIDC-IDRI, LUNA16, LIDC, DSB2017, NLST, TianChi, and ELCAP. Overall, in the detection studies reviewed, the sensitivity of these techniques is found to range from 61.61% to 98.10%, and the value of the FPs per scan is between 0.125 and 32. In the selected classification studies, the accuracy ranges from 75.01% to 97.58%. The precision of the selected retrieval studies is between 71.43% and 87.29%. Based on performance, deep learning based CAD technologies for detection and classification of pulmonary nodules achieve satisfactory results. However, there are still many challenges and limitations remaining including over-fitting, lack of interpretability and insufficient annotated data. This review helps researchers and radiologists to better understand CAD technology for pulmonary nodule detection, segmentation, classification and retrieval. We summarize the performance of current techniques, consider the challenges, and propose directions for future high-impact research.
Affiliation(s)
- Yu Gu: Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou 014010, China
- Jingqian Chi: Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou 014010, China
- Jiaqi Liu: Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou 014010, China
- Lidong Yang: Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou 014010, China
- Baohua Zhang: Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou 014010, China
- Dahua Yu: Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou 014010, China
- Ying Zhao: Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou 014010, China
- Xiaoqi Lu: Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou 014010, China; College of Information Engineering, Inner Mongolia University of Technology, Hohhot 010051, China
27
Farhangi MM, Sahiner B, Petrick N, Pezeshk A. Automatic lung nodule detection in thoracic CT scans using dilated slice-wise convolutions. Med Phys 2021; 48:3741-3751. [PMID: 33932241 DOI: 10.1002/mp.14915] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2020] [Revised: 04/08/2021] [Accepted: 04/15/2021] [Indexed: 12/24/2022] Open
Abstract
PURPOSE Most state-of-the-art automated medical image analysis methods for volumetric data rely on adaptations of two-dimensional (2D) and three-dimensional (3D) convolutional neural networks (CNNs). In this paper, we develop a novel unified CNN-based model that combines the benefits of 2D and 3D networks for analyzing volumetric medical images. METHODS In our proposed framework, multiscale contextual information is first extracted from 2D slices inside a volume of interest (VOI). This is followed by dilated 1D convolutions across slices to aggregate in-plane features in a slice-wise manner and encode the information in the entire volume. Moreover, we formalize a curriculum learning strategy for a two-stage system (i.e., a system that consists of screening and false positive reduction), where the training samples are presented to the network in a meaningful order to further improve the performance. RESULTS We evaluated the proposed approach by developing a computer-aided detection (CADe) system for lung nodules. Our results on 888 CT exams demonstrate that the proposed approach can effectively analyze volumetric data by achieving a sensitivity of > 0.99 in the screening stage and a sensitivity of > 0.96 at eight false positives per case in the false positive reduction stage. CONCLUSION Our experimental results show that the proposed method provides competitive results compared to state-of-the-art 3D frameworks. In addition, we illustrate the benefits of curriculum learning strategies in two-stage systems that are in common use in medical imaging applications.
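The core architectural idea — per-slice 2D features aggregated by dilated 1D convolutions along the slice axis — can be sketched as below. This depthwise, single-kernel NumPy version is a simplification of the paper's learned multi-channel convolutions:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Dilated 1D convolution across the slice axis ('valid' padding).
    x: (num_slices, feat_dim) feature vectors, one per 2D slice.
    kernel: (k,) weights shared across feature channels (depthwise-style),
    so each output mixes slices spaced `dilation` apart."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field in slices
    return np.stack([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(x.shape[0] - span + 1)
    ])

# 16 slices, each reduced to an 8-dim feature vector by a (hypothetical) 2D CNN
slices = np.random.default_rng(1).standard_normal((16, 8))
y = dilated_conv1d(slices, np.array([0.25, 0.5, 0.25]), dilation=2)
print(y.shape)  # (12, 8)
```

Stacking such layers with growing dilation lets the receptive field cover the whole VOI in depth while the per-slice work stays 2D.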
Affiliation(s)
- M Mehdi Farhangi: Division of Imaging, Diagnostics, and Software Reliability, CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Berkman Sahiner: Division of Imaging, Diagnostics, and Software Reliability, CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Nicholas Petrick: Division of Imaging, Diagnostics, and Software Reliability, CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Aria Pezeshk: Division of Imaging, Diagnostics, and Software Reliability, CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
28
Xiong Z, Jiang Y, Che S, Zhao W, Guo Y, Li G, Liu A, Li Z. Use of CT radiomics to differentiate minimally invasive adenocarcinomas and invasive adenocarcinomas presenting as pure ground-glass nodules larger than 10 mm. Eur J Radiol 2021; 141:109772. [PMID: 34022476 DOI: 10.1016/j.ejrad.2021.109772] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2020] [Revised: 04/12/2021] [Accepted: 05/10/2021] [Indexed: 12/17/2022]
Abstract
PURPOSE This study aimed to develop a model based on radiomics features extracted from computed tomography (CT) images to effectively differentiate between minimally invasive adenocarcinomas (MIAs) and invasive adenocarcinomas (IAs) manifesting as pure ground-glass nodules (pGGNs) larger than 10 mm. METHOD This retrospective study included patients who underwent surgical resection for persistent pGGN between November 2012 and June 2018 and were diagnosed with MIA or IA. The patients were randomly assigned to training and test cohorts. The correlation coefficient method and the least absolute shrinkage and selection operator (LASSO) method were applied to select radiomics features for constructing a model whose performance was assessed by the area under the receiver operating characteristic curve (AUC-ROC). The radiomics model was compared to a standard CT model (shape, volume, and mean CT value of the largest cross-section) and a combined radiomics-standard CT model using univariate and multivariate logistic regression analysis. RESULTS The radiomics model showed better discriminative ability (training AUC, 0.879; test AUC, 0.877) than the standard CT model (training AUC, 0.820; test AUC, 0.828). The combined model (training AUC, 0.879; test AUC, 0.870) did not demonstrate improved performance compared with the radiomics model. The radiomics score was an independent predictor of invasiveness in multivariate logistic analysis. CONCLUSIONS For pGGNs larger than 10 mm, the radiomics model demonstrated superior diagnostic performance in differentiating between IAs and MIAs, which may be useful to clinicians for diagnosis and treatment selection.
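The LASSO step used here for radiomics feature selection can be sketched with iterative soft-thresholding (ISTA) in plain NumPy; the L1 penalty drives coefficients of uninformative features to exactly zero, so the surviving features form the selected subset. The data and hyperparameters below are synthetic and illustrative, not from the study:

```python
import numpy as np

def lasso_ista(X, y, lam=0.1, lr=0.01, iters=2000):
    """LASSO via iterative soft-thresholding:
    minimize ||Xw - y||^2 / (2n) + lam * ||w||_1."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of the squared loss
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft threshold
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 20))           # 80 nodules, 20 candidate features
y = X[:, 0] * 2.0 - X[:, 3] + 0.1 * rng.standard_normal(80)  # 2 informative features
w = lasso_ista(X, y)
selected = np.flatnonzero(np.abs(w) > 1e-3)
print(selected)  # the informative features (0 and 3) survive the penalty
```

In the radiomics setting, `X` would hold the candidate features per nodule and `y` the invasiveness label; the selected columns then feed the logistic model.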
Affiliation(s)
- Ziqi Xiong: Department of Radiology, the First Affiliated Hospital of Dalian Medical University, Dalian, China
- Yining Jiang: Department of Radiology, the First Affiliated Hospital of Dalian Medical University, Dalian, China
- Siyu Che: Department of Radiology, the First Affiliated Hospital of Dalian Medical University, Dalian, China
- Wenjing Zhao: Department of Radiology, the First Affiliated Hospital of Dalian Medical University, Dalian, China
- Yan Guo: GE Healthcare, Shenyang, China
- Guosheng Li: Department of Pathology, the First Affiliated Hospital of Dalian Medical University, Dalian, China
- Ailian Liu: Department of Radiology, the First Affiliated Hospital of Dalian Medical University, Dalian, China
- Zhiyong Li: Department of Radiology, the First Affiliated Hospital of Dalian Medical University, Dalian, China
29
On the performance of lung nodule detection, segmentation and classification. Comput Med Imaging Graph 2021; 89:101886. [PMID: 33706112 DOI: 10.1016/j.compmedimag.2021.101886] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2020] [Revised: 01/11/2021] [Accepted: 02/02/2021] [Indexed: 01/10/2023]
Abstract
Computed tomography (CT) screening is an effective way to detect lung cancer early and thereby improve the survival rate for this deadly disease. For more than two decades, image processing techniques such as nodule detection, segmentation, and classification have been extensively studied to assist physicians in identifying nodules among hundreds of CT slices, automatically measuring nodule shapes and HU distributions, and distinguishing malignancy. Thanks to parallel computation, multi-layer convolutions, nonlinear pooling operations, and big-data learning strategies, recent deep-learning algorithms have shown great progress in lung nodule screening and computer-assisted diagnosis (CADx) applications due to their high sensitivity and low false-positive rates. This paper presents a survey of state-of-the-art deep-learning-based lung nodule screening and analysis techniques, focusing on their performance and clinical applications, to help better understand the current performance, limitations, and future trends of lung nodule analysis.
30
Rakocz N, Chiang JN, Nittala MG, Corradetti G, Tiosano L, Velaga S, Thompson M, Hill BL, Sankararaman S, Haines JL, Pericak-Vance MA, Stambolian D, Sadda SR, Halperin E. Automated identification of clinical features from sparsely annotated 3-dimensional medical imaging. NPJ Digit Med 2021; 4:44. [PMID: 33686212 PMCID: PMC7940637 DOI: 10.1038/s41746-021-00411-w] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2020] [Accepted: 01/26/2021] [Indexed: 12/30/2022] Open
Abstract
One of the core challenges in applying machine learning and artificial intelligence to medicine is the limited availability of annotated medical data. Unlike in other applications of machine learning, where labeled data are abundant, the labeling and annotation of medical data and images require major manual effort by expert clinicians, who rarely have time to annotate at scale. In this work, we propose a new deep learning technique (SLIVER-net) to predict clinical features from 3-dimensional volumes using a limited number of manually annotated examples. SLIVER-net is based on transfer learning, where we borrow information about the structure and parameters of the network from publicly available large datasets. Since public volume data are scarce, we use 2D images and account for the 3-dimensional structure using a novel deep learning method that tiles the volume scans and then adds layers that leverage the 3D structure. To illustrate its utility, we apply SLIVER-net to predict risk factors for progression of age-related macular degeneration (AMD), a leading cause of blindness, from optical coherence tomography (OCT) volumes acquired from multiple sites. SLIVER-net successfully predicts these factors despite being trained with a relatively small number of annotated volumes (hundreds) and only dozens of positive training examples. Our empirical evaluation demonstrates that SLIVER-net significantly outperforms standard state-of-the-art deep learning techniques used for medical volumes, and its performance generalizes, as it was validated on an external testing set. In a direct comparison with a clinician panel, we find that SLIVER-net also outperforms junior specialists and identifies AMD progression risk factors similarly to expert retina specialists.
Affiliation(s)
- Nadav Rakocz
- Department of Computer Science, University of California, Los Angeles, CA, USA
- Jeffrey N Chiang
- Department of Computational Medicine, University of California, Los Angeles, CA, USA
- Giulia Corradetti
- Doheny Eye Institute, Los Angeles, CA, USA
- Department of Ophthalmology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Liran Tiosano
- Doheny Eye Institute, Los Angeles, CA, USA
- Faculty of Medicine, Hebrew University of Jerusalem, Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Jerusalem, Israel
- Michael Thompson
- Department of Computer Science, University of California, Los Angeles, CA, USA
- Brian L Hill
- Department of Computer Science, University of California, Los Angeles, CA, USA
- Sriram Sankararaman
- Department of Computer Science, University of California, Los Angeles, CA, USA
- Department of Computational Medicine, University of California, Los Angeles, CA, USA
- Department of Human Genetics, University of California, Los Angeles, CA, USA
- Jonathan L Haines
- Department of Population & Quantitative Health Sciences, Case Western Reserve University, Cleveland, OH, USA
- Margaret A Pericak-Vance
- John P. Hussman Institute for Human Genomics, University of Miami Miller School of Medicine, Miami, FL, USA
- Dwight Stambolian
- Department of Ophthalmology, University of Pennsylvania, Perelman School of Medicine, Philadelphia, PA, USA
- Srinivas R Sadda
- Doheny Eye Institute, Los Angeles, CA, USA
- Department of Ophthalmology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Eran Halperin
- Department of Computer Science, University of California, Los Angeles, CA, USA
- Department of Computational Medicine, University of California, Los Angeles, CA, USA
- Faculty of Medicine, Hebrew University of Jerusalem, Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Jerusalem, Israel
- Department of Anesthesiology, University of California, Los Angeles, CA, USA
- Institute of Precision Health, University of California, Los Angeles, CA, USA
31
A novel technology to integrate imaging and clinical markers for non-invasive diagnosis of lung cancer. Sci Rep 2021; 11:4597. [PMID: 33633213 PMCID: PMC7907202 DOI: 10.1038/s41598-021-83907-5] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2020] [Accepted: 02/09/2021] [Indexed: 12/17/2022] Open
Abstract
This study presents a non-invasive, automated clinical diagnostic system for early diagnosis of lung cancer that integrates imaging data from a single computed tomography (CT) scan and breath biomarkers obtained from a single exhaled breath to quickly and accurately classify lung nodules. CT imaging and breath volatile organic compound (VOC) data were collected from 47 patients. Spherical-harmonics-based shape features quantifying the shape complexity of the pulmonary nodules, a 7th-order Markov-Gibbs random field appearance model describing the spatial non-homogeneities in the pulmonary nodule, and volumetric (size) features of the pulmonary nodules were calculated from CT images. 27 VOCs in exhaled breath were captured by a micro-reactor approach and quantified using mass spectrometry. CT and breath markers were input into a deep-learning autoencoder classifier with leave-one-subject-out cross-validation for nodule classification. To mitigate the limitation of a small sample size and validate the methodology for individual markers, retrospective CT scans from 467 patients with 727 pulmonary nodules, and breath samples from 504 patients, were analyzed. The CAD system achieved 97.8% accuracy, 97.3% sensitivity, 100% specificity, and 99.1% area under the curve in classifying pulmonary nodules.
32
Wang B, Si S, Zhao H, Zhu H, Dou S. False positive reduction in pulmonary nodule classification using 3D texture and edge feature in CT images. Technol Health Care 2021; 29:1071-1088. [PMID: 30664518 DOI: 10.3233/thc-181565] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND Pulmonary nodule detection can significantly influence the early diagnosis of lung cancer but is hampered by false positives. OBJECTIVE In this study, we focus on false positive reduction and present a method for accurate and rapid detection of pulmonary nodules from suspect regions using 3D texture and edge features. METHODS This work mainly consists of four modules. First, small pulmonary nodule candidates are preprocessed by a reconstruction approach that enhances the 3D image feature. Second, a texture feature descriptor named cross-scale local binary patterns (CS-LBP) is proposed to extract spatial texture information. Third, we design a 3D edge feature descriptor named orthogonal edge orientation histogram (ORT-EOH) to obtain spatial edge information. Finally, hierarchical support vector machines (H-SVMs) classify suspect regions as either nodules or non-nodules using the joint CS-LBP and ORT-EOH feature vector. RESULTS For solitary solid nodules, ground-glass opacities, juxta-vascular nodules, and juxta-pleural nodules, the average sensitivity, average specificity, and average accuracy of our method are 95.69%, 96.95%, and 96.04%, respectively. The elapsed times in the training and testing stages are 321.76 s and 5.69 s. CONCLUSIONS Our proposed method performs best among the compared state-of-the-art methods and improves the precision of pulmonary nodule detection at low computational cost.
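For context on the texture descriptor family: CS-LBP extends the classic local binary pattern, which thresholds each of a pixel's 8 neighbors against the center to form an 8-bit code. A minimal sketch of the plain (non-cross-scale) LBP, with a hypothetical helper name:

```python
def lbp_code(patch):
    """Classic 8-neighbor LBP code of a 3x3 patch.

    Each neighbor >= center contributes one bit; the 8 bits, read
    clockwise from the top-left neighbor, form an integer in [0, 255].
    CS-LBP extends this idea by comparing pixels across scales rather
    than against the center alone.
    """
    c = patch[1][1]
    # clockwise neighbor coordinates starting at the top-left corner
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[i][j] >= c else 0 for i, j in coords]
    return sum(b << k for k, b in enumerate(bits))

code = lbp_code([[6, 4, 5],
                 [7, 5, 2],
                 [9, 1, 3]])
```

A histogram of such codes over a region is what serves as the texture feature vector fed to the classifier.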
33
Wu Z, Ge R, Shi G, Zhang L, Chen Y, Luo L, Cao Y, Yu H. MD-NDNet: a multi-dimensional convolutional neural network for false-positive reduction in pulmonary nodule detection. Phys Med Biol 2020; 65:235053. [PMID: 32698172 DOI: 10.1088/1361-6560/aba87c] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]
Abstract
Pulmonary nodule false-positive reduction is of great significance for automated nodule detection in clinical diagnosis with low-dose computed tomography (LDCT) lung cancer screening. Due to individual intra-nodule variations and the visual similarity between true nodules and false positives appearing as soft tissue in LDCT images, current clinical practice remains error-prone and time-consuming. In this paper, we propose a multi-dimensional nodule detection network (MD-NDNet) for automatic nodule false-positive reduction using deep convolutional neural networks (DCNNs). The underlying method collaboratively integrates multi-dimensional nodule information: it extracts inter-plane volumetric correlation features using three-dimensional CNNs (3D CNNs) and spatial nodule correlation features from the sagittal, coronal, and axial planes using two-dimensional CNNs (2D CNNs) with an attention module. To accommodate nodule candidates of different sizes and shapes, a multi-scale ensemble strategy is employed for weighted probability aggregation. The proposed method is evaluated on the LUNA16 challenge dataset from ISBI 2016 with ten-fold cross-validation. Experimental results show that the proposed framework achieves a CPM score of 0.9008. All of this indicates that our method enables efficient, accurate, and reliable pulmonary nodule detection for clinical diagnosis.
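For reference, the CPM score reported here is conventionally the sensitivity averaged over seven operating points of the FROC curve (1/8, 1/4, 1/2, 1, 2, 4, and 8 false positives per scan). With per-point sensitivities in hand (the numbers below are illustrative, not the paper's), the computation is a single average:

```python
# Sensitivities at 1/8, 1/4, 1/2, 1, 2, 4, and 8 FPs per scan (illustrative).
sens = [0.82, 0.86, 0.89, 0.91, 0.93, 0.95, 0.95]

# Competition Performance Metric: mean sensitivity over the 7 operating points.
cpm = sum(sens) / len(sens)
```

The metric rewards detectors that keep sensitivity high even at very low false-positive budgets, which is why it is the standard summary for the LUNA16 false-positive reduction track.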
Affiliation(s)
- Zhan Wu
- School of Cyberspace Security, Southeast University, Nanjing, Jiangsu, China
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA, United States of America
34
Hong J, Yun HJ, Park G, Kim S, Laurentys CT, Siqueira LC, Tarui T, Rollins CK, Ortinau CM, Grant PE, Lee JM, Im K. Fetal Cortical Plate Segmentation Using Fully Convolutional Networks With Multiple Plane Aggregation. Front Neurosci 2020; 14:591683. [PMID: 33343286 PMCID: PMC7738480 DOI: 10.3389/fnins.2020.591683] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2020] [Accepted: 11/04/2020] [Indexed: 01/14/2023] Open
Abstract
Fetal magnetic resonance imaging (MRI) has the potential to advance our understanding of human brain development by providing quantitative information on cortical plate (CP) development in vivo. However, for reliable quantitative analysis of cortical volume and sulcal folding, accurate and automated segmentation of the CP is crucial. In this study, we propose a fully convolutional neural network for automatic segmentation of the CP. We developed a novel hybrid loss function to improve segmentation accuracy and adopted multi-view (axial, coronal, and sagittal) aggregation with a test-time augmentation method to reduce errors using three-dimensional (3D) information and multiple predictions. We evaluated our proposed method using ten-fold cross-validation on 52 fetal brain MR images (22.9-31.4 weeks of gestation). The proposed method obtained Dice coefficients of 0.907 ± 0.027 and 0.906 ± 0.031, as well as mean surface distance errors of 0.182 ± 0.058 mm and 0.185 ± 0.069 mm, for the left and right CP, respectively. In addition, the left and right CP volumes, surface area, and global mean curvature generated by automatic segmentation showed high correlation with the values generated by manual segmentation (R² > 0.941). We also demonstrated that the proposed hybrid loss function and the combination of multi-view aggregation and test-time augmentation significantly improved CP segmentation accuracy. Our proposed segmentation method will be useful for automatic and reliable quantification of the cortical structure in the fetal brain.
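The Dice coefficient used above to score segmentation overlap is 2|A∩B| / (|A| + |B|); a minimal sketch on flat binary masks, independent of the authors' pipeline:

```python
def dice(a, b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for two binary masks
    given as flat 0/1 sequences of equal length. Two empty masks
    are treated as perfect agreement."""
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2 * inter / total if total else 1.0

pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
score = dice(pred, truth)  # 2 voxels overlap, 3 + 3 foreground -> 2/3
```

A value near 0.9, as reported, means roughly 90% of the predicted and manual foreground voxels coincide.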
Affiliation(s)
- Jinwoo Hong
- Department of Electronic Engineering, Hanyang University, Seoul, South Korea
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States
- Hyuk Jin Yun
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States
- Division of Newborn Medicine, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States
- Gilsoon Park
- Department of Biomedical Engineering, Hanyang University, Seoul, South Korea
- Seonggyu Kim
- Department of Electronic Engineering, Hanyang University, Seoul, South Korea
- Cynthia T. Laurentys
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States
- Leticia C. Siqueira
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States
- Tomo Tarui
- Mother Infant Research Institute, Tufts Medical Center, Tufts University School of Medicine, Boston, MA, United States
- Department of Pediatrics, Tufts Medical Center, Tufts University School of Medicine, Boston, MA, United States
- Caitlin K. Rollins
- Department of Neurology, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States
- Cynthia M. Ortinau
- Department of Pediatrics, Washington University in St. Louis, St. Louis, MO, United States
- P. Ellen Grant
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States
- Division of Newborn Medicine, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States
- Jong-Min Lee
- Department of Biomedical Engineering, Hanyang University, Seoul, South Korea
- Kiho Im
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States
- Division of Newborn Medicine, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States
35
Lu X, Gu Y, Yang L, Zhang B, Zhao Y, Yu D, Zhao J, Gao L, Zhou T, Liu Y, Zhang W. Multi-level 3D Densenets for False-positive Reduction in Lung Nodule Detection Based on Chest Computed Tomography. Curr Med Imaging 2020; 16:1004-1021. [PMID: 33081662 DOI: 10.2174/1573405615666191113122840] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2019] [Revised: 10/11/2019] [Accepted: 10/19/2019] [Indexed: 12/31/2022]
Abstract
OBJECTIVE False-positive nodule reduction is a crucial part of a computer-aided detection (CADe) system, which assists radiologists in accurate lung nodule detection. In this research, a novel scheme using a multi-level 3D DenseNet framework is proposed for the false-positive nodule reduction task. METHODS Multi-level 3D DenseNet models were extended to differentiate lung nodules from false-positive nodules. First, different models were fed 3D cubes of different sizes, encoding multi-level contextual information to meet the challenge of the large variations among lung nodules. In addition, image rotation and flipping were utilized to upsample the positive samples. Furthermore, the 3D DenseNets were designed to preserve low-level nodule information, as the densely connected structure of DenseNet reuses lung nodule features and boosts feature propagation. Finally, an optimal weighted linear combination of all model scores yielded the best classification result in this research. RESULTS The proposed method was evaluated on the LUNA16 dataset, which contains 888 thin-slice CT scans. Performance was validated via 10-fold cross-validation. Both the Free-response Receiver Operating Characteristic (FROC) curve and the Competition Performance Metric (CPM) score show that the proposed scheme achieves satisfactory detection performance in the false-positive reduction track of the LUNA16 challenge. CONCLUSION The results show that the proposed scheme can be significant for the false-positive nodule reduction task.
Affiliation(s)
- Xiaoqi Lu
- College of Information Engineering, Inner Mongolia University of Technology, Hohhot, 010051, China
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, China
- Yu Gu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, China
- Lidong Yang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Baohua Zhang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Ying Zhao
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Dahua Yu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Jianfeng Zhao
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Lixin Gao
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- School of Foreign Languages, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Tao Zhou
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Yang Liu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Wei Zhang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
36
Yanagawa M, Niioka H, Kusumoto M, Awai K, Tsubamoto M, Satoh Y, Miyata T, Yoshida Y, Kikuchi N, Hata A, Yamasaki S, Kido S, Nagahara H, Miyake J, Tomiyama N. Diagnostic performance for pulmonary adenocarcinoma on CT: comparison of radiologists with and without three-dimensional convolutional neural network. Eur Radiol 2020; 31:1978-1986. [PMID: 33011879 DOI: 10.1007/s00330-020-07339-x] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2020] [Revised: 09/02/2020] [Accepted: 09/22/2020] [Indexed: 12/17/2022]
Abstract
OBJECTIVES To compare diagnostic performance for pulmonary invasive adenocarcinoma among radiologists with and without a three-dimensional convolutional neural network (3D-CNN). METHODS Enrolled were 285 patients with adenocarcinoma in situ (AIS, n = 75), minimally invasive adenocarcinoma (MIA, n = 58), and invasive adenocarcinoma (IVA, n = 152). A 3D-CNN model was constructed with seven convolution-pooling layers, two max-pooling layers, and fully connected layers, using batch normalization, residual connections, and global average pooling. Only a flipping process was performed for augmentation. The output layer comprised two nodes for two conditions (AIS/MIA and IVA) according to prognosis. Diagnostic performance of the 3D-CNN model in the 285 patients was calculated using nested 10-fold cross-validation. In 90 of the 285 patients, the results of each radiologist (R1, R2, and R3, with 9, 14, and 26 years of experience, respectively) with and without the 3D-CNN model were statistically compared. RESULTS Without the 3D-CNN model, the accuracy, sensitivity, and specificity of the radiologists were as follows: R1, 70.0%, 52.1%, and 90.5%; R2, 72.2%, 75%, and 69%; and R3, 74.4%, 89.6%, and 57.1%, respectively. With the 3D-CNN model, they were: R1, 72.2%, 77.1%, and 66.7%; R2, 74.4%, 85.4%, and 61.9%; and R3, 74.4%, 93.8%, and 52.4%, respectively. Diagnostic performance of each radiologist with and without the 3D-CNN model showed no significant difference overall (p > 0.88), but the accuracy of R1 and R2 was significantly higher with the 3D-CNN model than without it (p < 0.01). CONCLUSIONS The 3D-CNN model can help a less-experienced radiologist improve diagnostic accuracy for pulmonary invasive adenocarcinoma without degrading any diagnostic performance. KEY POINTS • The 3D-CNN model is a non-invasive method for predicting pulmonary invasive adenocarcinoma on CT images with high sensitivity.
• Diagnostic accuracy of a less-experienced radiologist was better with the 3D-CNN model than without it.
Affiliation(s)
- Masahiro Yanagawa
- Department of Radiology, Graduate School of Medicine, Osaka University, 2-2 Yamadaoka, Suita City, Osaka, 565-0871, Japan.
- Hirohiko Niioka
- Institute for Datability Science, Osaka University, 2-8 Yamadaoka, Suita City, Osaka, 565-0871, Japan
- Masahiko Kusumoto
- Department of Diagnostic Radiology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, Japan
- Kazuo Awai
- Department of Diagnostic Radiology, Graduate School of Biomedical and Health Science, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan
- Mitsuko Tsubamoto
- Department of Future Diagnostic Radiology, Graduate School of Medicine, Osaka University, 2-2 Yamadaoka, Suita City, Osaka, 565-0871, Japan
- Yukihisa Satoh
- Department of Radiology, Graduate School of Medicine, Osaka University, 2-2 Yamadaoka, Suita City, Osaka, 565-0871, Japan
- Tomo Miyata
- Department of Radiology, Graduate School of Medicine, Osaka University, 2-2 Yamadaoka, Suita City, Osaka, 565-0871, Japan
- Yuriko Yoshida
- Department of Radiology, Graduate School of Medicine, Osaka University, 2-2 Yamadaoka, Suita City, Osaka, 565-0871, Japan
- Noriko Kikuchi
- Department of Radiology, Graduate School of Medicine, Osaka University, 2-2 Yamadaoka, Suita City, Osaka, 565-0871, Japan
- Akinori Hata
- Department of Radiology, Graduate School of Medicine, Osaka University, 2-2 Yamadaoka, Suita City, Osaka, 565-0871, Japan
- Shohei Yamasaki
- Graduate School of Information Science and Technology, Osaka University, 1-5 Yamadaoka, Suita, Osaka, 565-0871, Japan
- Shoji Kido
- Department of Artificial Intelligence Diagnostic Radiology, Graduate School of Medicine, Osaka University, 2-2 Yamadaoka, Suita City, Osaka, 565-0871, Japan
- Hajime Nagahara
- Institute for Datability Science, Osaka University, 2-8 Yamadaoka, Suita City, Osaka, 565-0871, Japan
- Jun Miyake
- Graduate School of Engineering, Osaka University, 2-8 Yamadaoka, Suita City, Osaka, 565-0871, Japan
- Noriyuki Tomiyama
- Department of Radiology, Graduate School of Medicine, Osaka University, 2-2 Yamadaoka, Suita City, Osaka, 565-0871, Japan
37
Liu C, Hu SC, Wang C, Lafata K, Yin FF. Automatic detection of pulmonary nodules on CT images with YOLOv3: development and evaluation using simulated and patient data. Quant Imaging Med Surg 2020; 10:1917-1929. [PMID: 33014725 PMCID: PMC7495314 DOI: 10.21037/qims-19-883] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2019] [Accepted: 06/29/2020] [Indexed: 12/15/2022]
Abstract
BACKGROUND To develop a high-efficiency pulmonary nodule computer-aided detection (CAD) method for localization and diameter estimation. METHODS The developed CAD method centers on a convolutional neural network (CNN) algorithm, You Only Look Once (YOLO) v3, as a deep learning approach. The method features two distinct components: (I) an automatic multi-scale feature extractor for nodule feature screening, and (II) a feature-based bounding-box generator for nodule localization and diameter estimation. Two independent studies were performed to train and evaluate this CAD method. The first was a computer simulation that utilized computer-based ground truth: 300 CT scans were simulated by the cardiac-torso (XCAT) digital phantom, and spherical nodules of various sizes (3-10 mm in diameter) were randomly implanted within the lung region of the simulated images. The second study utilized human-based ground truth in patients. The CAD method was developed with CT scans sourced from the LIDC-IDRI database; CT scans with slice thickness above 2.5 mm were excluded, leaving 888 CT images for analysis. A 10-fold cross-validation procedure was implemented in both studies to evaluate network hyper-parameterization and generalization. The overall accuracy of the CAD method was evaluated by the detection sensitivities in response to the average number of false positives (FPs) per image. In the patient study, detection accuracy was further compared against 9 recently published CAD studies using free-response receiver operating characteristic (FROC) curve analysis. Localization and diameter-estimation accuracies were quantified by the mean and standard error between the predicted value and the ground truth. RESULTS The average results across the 10 cross-validation folds in both studies demonstrated that the CAD method achieved high detection accuracy. The sensitivity was 99.3% (FPs = 1), improving to 100% (FPs = 4) in the simulation study. The corresponding sensitivities were 90.0% and 95.4% in the patient study, showing superiority over several conventional and CNN-based lung nodule CAD methods in the FROC curve analysis. Nodule localization and diameter-estimation errors were less than 1 mm in both studies. The developed CAD method achieved high computational efficiency: it yields nodule-specific quantitative values (number, existence confidence, central coordinates, and diameter) within 0.1 s for 2D CT slice inputs. CONCLUSIONS The reported results suggest that the developed lung pulmonary nodule CAD method achieves highly accurate nodule localization and diameter estimation. Its high computational efficiency enables potential clinical application in the future.
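A YOLO-style detector is typically scored by matching predicted bounding boxes to ground truth via intersection-over-union (IoU); a minimal 2D sketch with boxes as (x1, y1, x2, y2) tuples, illustrative rather than the paper's implementation:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # 0 if boxes are disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

val = iou((0, 0, 2, 2), (1, 1, 3, 3))  # 1x1 overlap, union 4 + 4 - 1 = 7
```

A prediction whose IoU with a ground-truth nodule exceeds a chosen threshold counts as a detection; otherwise it contributes to the false-positive count on the FROC curve.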
Affiliation(s)
- Chenyang Liu
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, China
- Shen-Chiang Hu
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, China
- Chunhao Wang
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Kyle Lafata
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Fang-Fang Yin
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, China
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
38
Yu J, Yang B, Wang J, Leader J, Wilson D, Pu J. 2D CNN versus 3D CNN for false-positive reduction in lung cancer screening. J Med Imaging (Bellingham) 2020; 7:051202. [PMID: 33062802 PMCID: PMC7550796 DOI: 10.1117/1.jmi.7.5.051202] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2020] [Accepted: 09/28/2020] [Indexed: 11/14/2022] Open
Abstract
Purpose: To clarify whether and to what extent a three-dimensional (3D) convolutional neural network (CNN) is superior to a 2D CNN when applied to reduce false-positive nodule detections in low-dose computed tomography (CT) lung cancer screening. Approach: We established a dataset of 1600 chest CT examinations acquired on different subjects from various sources. These examinations contained 18,280 candidate nodules in total, of which 9185 were nodules and 9095 were not. For each candidate nodule, we extracted a number of cubic subvolumes of 72 × 72 × 72 mm³ by rotating the CT examinations randomly 25 times prior to extracting the axis-aligned subvolumes. These subvolumes were split 8:1:1 into training, validation, and independent testing sets. We developed a multiscale CNN architecture and implemented its 2D and 3D versions to classify pulmonary nodules into two categories, true positive and false positive. The performance of the 2D/3D-CNN classification schemes was evaluated using the area under the receiver operating characteristic curve (AUC); p-values and 95% confidence intervals (CI) were calculated. Results: The AUC for the optimal 2D-CNN model was 0.9307 (95% CI: 0.9285 to 0.9330), with a sensitivity of 92.70% and a specificity of 76.21%. The best-performing 3D-CNN model had an AUC of 0.9541 (95% CI: 0.9495 to 0.9583), with a sensitivity of 89.98% and a specificity of 87.30%. The developed multiscale CNN architecture outperformed the vanilla architecture. Conclusions: The 3D-CNN model performs better in false-positive reduction than its 2D counterpart; however, the improvement is relatively limited and demands more computational resources for training.
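The AUC used to compare the two models equals the probability that a randomly chosen true nodule scores higher than a randomly chosen false positive (the Mann-Whitney formulation). A small pure-Python sketch with made-up scores:

```python
def auc(pos, neg):
    """AUC via pairwise comparison: P(score_pos > score_neg),
    counting ties as half a win. O(n*m), fine for a sketch."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Classifier scores for true nodules (pos) and false positives (neg), made up.
value = auc(pos=[0.9, 0.8, 0.4], neg=[0.7, 0.3, 0.2])
```

Production code would use a rank-based O(n log n) computation (as in scikit-learn's `roc_auc_score`), but the pairwise form makes the probabilistic meaning explicit.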
Affiliation(s)
- Juezhao Yu
- University of Pittsburgh, Departments of Radiology and Bioengineering, Pittsburgh, Pennsylvania, United States
- Bohan Yang
- University of Pittsburgh, Departments of Radiology and Bioengineering, Pittsburgh, Pennsylvania, United States
- Jing Wang
- University of Pittsburgh, Departments of Radiology and Bioengineering, Pittsburgh, Pennsylvania, United States
- Joseph Leader
- University of Pittsburgh, Departments of Radiology and Bioengineering, Pittsburgh, Pennsylvania, United States
- David Wilson
- University of Pittsburgh, Department of Medicine, Pittsburgh, Pennsylvania, United States
- Jiantao Pu
- University of Pittsburgh, Departments of Radiology and Bioengineering, Pittsburgh, Pennsylvania, United States
39
Cui S, Ming S, Lin Y, Chen F, Shen Q, Li H, Chen G, Gong X, Wang H. Development and clinical application of deep learning model for lung nodules screening on CT images. Sci Rep 2020; 10:13657. [PMID: 32788705 PMCID: PMC7423892 DOI: 10.1038/s41598-020-70629-3] [Citation(s) in RCA: 50] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2020] [Accepted: 07/29/2020] [Indexed: 12/11/2022] Open
Abstract
Lung cancer screening based on low-dose CT (LDCT) is now widely applied because of its effectiveness and ease of performance. Radiologists who evaluate large numbers of LDCT screening images face enormous challenges, including mechanically repetitive and tedious work, easy omission of small nodules, and lack of consistent criteria. An efficient method is required to help radiologists improve nodule detection accuracy with efficiency and cost-effectiveness. Many novel deep neural network-based systems have demonstrated potential for detecting lung nodules, but their effectiveness in clinical practice has not been fully recognized or proven. Therefore, the aim of this study was to develop and assess a deep learning (DL) algorithm for identifying pulmonary nodules (PNs) on LDCT and to investigate the prevalence of PNs in China. Radiologist and algorithm performance were assessed using the FROC score, ROC-AUC, and average time consumption. Agreement between the reference standard and the DL algorithm in detecting positive nodules was assessed per study by Bland-Altman analysis. The Lung Nodule Analysis (LUNA) public database was used as the external test. The prevalence of NCPNs was investigated, along with detailed information on the number, location, and characteristics of pulmonary nodules as interpreted by two radiologists.
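The Bland-Altman analysis mentioned above summarizes per-study agreement as a bias (the mean difference between paired measurements) with 95% limits of agreement at bias ± 1.96 SD of the differences; a minimal sketch with made-up nodule counts:

```python
from statistics import mean, stdev

def bland_altman(x, y):
    """Bland-Altman bias and 95% limits of agreement between two paired
    measurement series: bias = mean difference, limits = bias ± 1.96 SD."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = mean(diffs)
    spread = 1.96 * stdev(diffs)  # sample standard deviation of differences
    return bias, bias - spread, bias + spread

# Per-study nodule counts: reference standard vs. algorithm (made up).
bias, lo, hi = bland_altman([5, 3, 8, 6, 4], [4, 3, 9, 6, 5])
```

If most paired differences fall inside [lo, hi] and the bias is near zero, the algorithm's nodule counts can be considered interchangeable with the reference standard for practical purposes.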
Affiliation(s)
- Sijia Cui
- Department of Radiology, Zhejiang Provincial People's Hospital, Affiliated People's Hospital of Hangzhou Medical College, Hangzhou, 310013, China
- The Second Clinical Medical College, Zhejiang Chinese Medical University, Hangzhou, 310053, China
- Shuai Ming
- Department of Radiology, Zhejiang Provincial People's Hospital, Affiliated People's Hospital of Hangzhou Medical College, Hangzhou, 310013, China
- Yi Lin
- Department of Radiology, Zhejiang Provincial People's Hospital, Affiliated People's Hospital of Hangzhou Medical College, Hangzhou, 310013, China
- Fanghong Chen
- Department of Radiology, Zhejiang Provincial People's Hospital, Affiliated People's Hospital of Hangzhou Medical College, Hangzhou, 310013, China
- Qiang Shen
- Department of Radiology, Zhejiang Provincial People's Hospital, Affiliated People's Hospital of Hangzhou Medical College, Hangzhou, 310013, China
- Hui Li
- Hangzhou Yitu Healthcare Technology Co., Ltd, Hangzhou, 310000, China
- Gen Chen
- Hangzhou Yitu Healthcare Technology Co., Ltd, Hangzhou, 310000, China
- Xiangyang Gong
- Department of Radiology, Zhejiang Provincial People's Hospital, Affiliated People's Hospital of Hangzhou Medical College, Hangzhou, 310013, China.
- Institute of Artificial Intelligence and Remote Imaging, Hangzhou Medical College, Hangzhou, 310000, China.
- Haochu Wang
- Department of Radiology, Zhejiang Provincial People's Hospital, Affiliated People's Hospital of Hangzhou Medical College, Hangzhou, 310013, China.
40
Zhang C, Wu S, Lu Z, Shen Y, Wang J, Huang P, Lou J, Liu C, Xing L, Zhang J, Xue J, Li D. Hybrid adversarial‐discriminative network for leukocyte classification in leukemia. Med Phys 2020; 47:3732-3744. [DOI: 10.1002/mp.14144] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2019] [Revised: 02/05/2020] [Accepted: 03/06/2020] [Indexed: 11/12/2022] Open
Affiliation(s)
- Chuanhao Zhang
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
- Shangshang Wu
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
- Zhiming Lu
- Department of Clinical Laboratory, Shandong Provincial Hospital Affiliated to Shandong University, 250014, China
- Yajuan Shen
- Department of Clinical Laboratory, Shandong Provincial Hospital Affiliated to Shandong University, 250014, China
- Jing Wang
- Department of Clinical Laboratory, Shandong Provincial Hospital Affiliated to Shandong University, 250014, China
- Pu Huang
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
- Jingjiao Lou
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
- Cong Liu
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
- Lei Xing
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA 94304, USA
- Jian Zhang
- Department of Clinical Laboratory, Shandong Provincial Hospital Affiliated to Shandong University, 250014, China
- Jie Xue
- Business School, Shandong Normal University, 250014, China
- Dengwang Li
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
41
Xu YM, Zhang T, Xu H, Qi L, Zhang W, Zhang YD, Gao DS, Yuan M, Yu TF. Deep Learning in CT Images: Automated Pulmonary Nodule Detection for Subsequent Management Using Convolutional Neural Network. Cancer Manag Res 2020; 12:2979-2992. [PMID: 32425607 PMCID: PMC7196793 DOI: 10.2147/cmar.s239927] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2019] [Accepted: 04/05/2020] [Indexed: 12/26/2022] Open
Abstract
PURPOSE The purpose of this study is to compare the detection performance of 3-dimensional convolutional neural network (3D CNN)-based computer-aided detection (CAD) models with that of radiologists of different levels of experience in detecting pulmonary nodules on thin-section computed tomography (CT). PATIENTS AND METHODS We retrospectively reviewed 1109 consecutive patients who underwent follow-up thin-section CT at our institution. The 3D CNN model for nodule detection was re-trained and complemented by expert augmentation. The annotations of a consensus panel consisting of two expert radiologists served as the ground truth. The detection performance of the re-trained CAD model and of three other radiologists at different levels of experience was tested using free-response receiver operating characteristic (FROC) analysis in the test group. RESULTS The detection performance of the re-trained CAD model was significantly better than that of the pre-trained network (sensitivity: 93.09% vs 38.44%). The re-trained CAD model also had significantly better detection performance than the radiologists (average sensitivity: 93.09% vs 50.22%), without significantly increasing the number of false positives per scan (1.64 vs 0.68). In the training set, 922 nodules less than 3 mm in size, in 211 patients at high risk, were recommended for follow-up CT according to the Fleischner Society guidelines. Fifteen of 101 solid nodules were confirmed to be lung cancer. CONCLUSION The re-trained 3D CNN-based CAD model, complemented by expert augmentation, was an accurate and efficient tool for identifying incidental pulmonary nodules for subsequent management.
Affiliation(s)
- Yi-Ming Xu
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, People’s Republic of China
- Teng Zhang
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, People’s Republic of China
- Hai Xu
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, People’s Republic of China
- Liang Qi
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, People’s Republic of China
- Wei Zhang
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, People’s Republic of China
- Yu-Dong Zhang
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, People’s Republic of China
- Da-Shan Gao
- 12sigma Technologies, San Diego, California, USA
- Mei Yuan
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, People’s Republic of China
- Tong-Fu Yu
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, People’s Republic of China
42
Farhangi MM, Petrick N, Sahiner B, Frigui H, Amini AA, Pezeshk A. Recurrent attention network for false positive reduction in the detection of pulmonary nodules in thoracic CT scans. Med Phys 2020; 47:2150-2160. [DOI: 10.1002/mp.14076] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2019] [Revised: 12/13/2019] [Accepted: 01/13/2020] [Indexed: 12/19/2022] Open
Affiliation(s)
- M. Mehdi Farhangi
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, MD 20993, USA
- Nicholas Petrick
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, MD 20993, USA
- Berkman Sahiner
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, MD 20993, USA
- Hichem Frigui
- Multimedia Laboratory, University of Louisville, Louisville, KY 40292, USA
- Amir A. Amini
- Medical Imaging Laboratory, University of Louisville, Louisville, KY 40292, USA
- Aria Pezeshk
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, MD 20993, USA
43
Wang Y, Wu B, Zhang N, Liu J, Ren F, Zhao L. Research progress of computer aided diagnosis system for pulmonary nodules in CT images. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2020; 28:1-16. [PMID: 31815727 DOI: 10.3233/xst-190581] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE Because computer-aided diagnosis (CAD) systems can make the interpretation of computed tomography (CT) images easier and more efficient, they have gained much attention and developed rapidly in recent years. This article reviews recent CAD techniques for pulmonary nodule detection and diagnosis in CT images. METHODS CAD systems can be classified into computer-aided detection (CADe) and computer-aided diagnosis (CADx) systems. This review reports recent research on both types of system, including the database, technique, innovation, and experimental results of each work. Multi-task CAD systems, which can handle segmentation, false-positive reduction, malignancy prediction, and other tasks at the same time, are also covered, and commercial CAD systems are briefly introduced. RESULTS We found that deep learning-based CAD is the mainstream of current research. The reported sensitivity of deep learning-based CADe systems ranged between 80.06% and 94.1% with an average of 4.3 false positives (FP) per scan on the LIDC-IDRI dataset, and between 94.4% and 97.9% with an average of 4 FP/scan on the LUNA16 dataset. The overall accuracy of deep learning-based CADx systems ranged between 86.84% and 92.3%, with an average reported AUC of 0.956, on the LIDC-IDRI dataset. CONCLUSIONS We summarize the current tendencies and limitations as well as future challenges in this field. The development of CAD needs to meet rigid clinical requirements, such as high accuracy, strong robustness, high efficiency, and fine-grained analysis and classification, and to provide practical clinical functions. This review provides helpful information for both engineering researchers and radiologists on the latest developments in CAD systems.
Affiliation(s)
- Yu Wang
- School of Biomedical Engineering, Capital Medical University, Beijing, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing, China
- Bo Wu
- School of Biomedical Engineering, Capital Medical University, Beijing, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing, China
- Nan Zhang
- School of Biomedical Engineering, Capital Medical University, Beijing, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing, China
- Jiabao Liu
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Fei Ren
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- Liqin Zhao
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
44
Gao Y, Tan J, Liang Z, Li L, Huo Y. Improved computer-aided detection of pulmonary nodules via deep learning in the sinogram domain. Vis Comput Ind Biomed Art 2019; 2:15. [PMID: 32240409 PMCID: PMC7099542 DOI: 10.1186/s42492-019-0029-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2019] [Accepted: 10/16/2019] [Indexed: 12/02/2022] Open
Abstract
Computer-aided detection (CADe) of pulmonary nodules plays an important role in assisting radiologists’ diagnosis and alleviating the interpretation burden for lung cancer. Current CADe systems, which aim to simulate radiologists’ examination procedure, are built upon computed tomography (CT) images, with feature extraction for detection and diagnosis. The CT image perceived by the human visual system is reconstructed from the sinogram, the original raw data acquired from the CT scanner. In this work, departing from conventional image-based CADe systems, we propose a novel sinogram-based CADe system in which the full projection information is used to explore additional effective features of nodules in the sinogram domain. Facing the challenges of limited research on this concept and unknown effective features in the sinogram domain, we design a new CADe system that utilizes the self-learning power of the convolutional neural network to learn and extract effective features from the sinogram. The proposed system was validated on 208 patient cases from the publicly available online Lung Image Database Consortium database, with each case having at least one juxtapleural nodule annotation. Experimental results demonstrated that our proposed method obtained an area under the receiver operating characteristic curve (AUC) of 0.91 based on the sinogram alone, compared with 0.89 based on the CT image alone. Moreover, a combination of sinogram and CT image further improved the AUC to 0.92. This study indicates that pulmonary nodule detection in the sinogram domain is feasible with deep learning.
Affiliation(s)
- Yongfeng Gao
- Department of Radiology, State University of New York, Stony Brook, NY, 11794, USA
- Jiaxing Tan
- Department of Radiology, State University of New York, Stony Brook, NY, 11794, USA
- Departments of Computer Science, City University of New York/CSI, Staten Island, NY, 10314, USA
- Zhengrong Liang
- Department of Radiology, State University of New York, Stony Brook, NY, 11794, USA
- Lihong Li
- Engineering and Environmental Science, City University of New York/CSI, Staten Island, NY, 10314, USA
- Yumei Huo
- Departments of Computer Science, City University of New York/CSI, Staten Island, NY, 10314, USA
45
Wang G, Li W, Aertsen M, Deprest J, Ourselin S, Vercauteren T. Aleatoric uncertainty estimation with test-time augmentation for medical image segmentation with convolutional neural networks. Neurocomputing 2019; 335:34-45. [PMID: 31595105 PMCID: PMC6783308 DOI: 10.1016/j.neucom.2019.01.103] [Citation(s) in RCA: 197] [Impact Index Per Article: 32.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Abstract
Despite their state-of-the-art performance for medical image segmentation, deep convolutional neural networks (CNNs) have rarely provided uncertainty estimations for their segmentation outputs, e.g., model (epistemic) and image-based (aleatoric) uncertainties. In this work, we analyze these different types of uncertainty for CNN-based 2D and 3D medical image segmentation tasks at both the pixel level and the structure level. We additionally propose a test-time augmentation-based aleatoric uncertainty to analyze the effect of different transformations of the input image on the segmentation output. Test-time augmentation has previously been used to improve segmentation accuracy but had not been formulated in a consistent mathematical framework. Hence, we also propose a theoretical formulation of test-time augmentation, in which a distribution of the prediction is estimated by Monte Carlo simulation with prior distributions of the parameters in an image acquisition model that involves image transformations and noise. We compare and combine our proposed aleatoric uncertainty with model uncertainty. Experiments on segmentation of fetal brains and brain tumors from 2D and 3D magnetic resonance images (MRI) showed that 1) the test-time augmentation-based aleatoric uncertainty provides a better uncertainty estimation than calculating the test-time dropout-based model uncertainty alone and helps to reduce overconfident incorrect predictions, and 2) our test-time augmentation outperforms a single-prediction baseline and dropout-based multiple predictions.
Affiliation(s)
- Guotai Wang
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Wenqi Li
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Michael Aertsen
- Department of Radiology, University Hospitals Leuven, Leuven, Belgium
- Jan Deprest
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Department of Radiology, University Hospitals Leuven, Leuven, Belgium
- Institute for Women’s Health, University College London, London, UK
- Department of Obstetrics and Gynaecology, University Hospitals Leuven, Leuven, Belgium
- Sébastien Ourselin
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Tom Vercauteren
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Department of Obstetrics and Gynaecology, University Hospitals Leuven, Leuven, Belgium
46
Pezeshk A, Hamidian S, Petrick N, Sahiner B. 3-D Convolutional Neural Networks for Automatic Detection of Pulmonary Nodules in Chest CT. IEEE J Biomed Health Inform 2019; 23:2080-2090. [DOI: 10.1109/jbhi.2018.2879449] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
47
Omigbodun AO, Noo F, McNitt‐Gray M, Hsu W, Hsieh SS. The effects of physics‐based data augmentation on the generalizability of deep neural networks: Demonstration on nodule false‐positive reduction. Med Phys 2019; 46:4563-4574. [DOI: 10.1002/mp.13755] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2019] [Revised: 07/11/2019] [Accepted: 08/01/2019] [Indexed: 12/19/2022] Open
Affiliation(s)
- Akinyinka O. Omigbodun
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, Suite 650, 924 Westwood Boulevard, Los Angeles, CA 90024, USA
- Frederic Noo
- Department of Radiology and Imaging Sciences, The University of Utah, Salt Lake City, UT 84108, USA
- Michael McNitt‐Gray
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, Suite 650, 924 Westwood Boulevard, Los Angeles, CA 90024, USA
- William Hsu
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, Suite 420, 924 Westwood Boulevard, Los Angeles, CA 90024, USA
- Scott S. Hsieh
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, Suite 650, 924 Westwood Boulevard, Los Angeles, CA 90024, USA
48
Duan L, Yuan G, Gong L, Fu T, Yang X, Chen X, Zheng J. Adversarial learning for deformable registration of brain MR image using a multi-scale fully convolutional network. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.101562] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
49
Huang X, Sun W, Tseng TL(B), Li C, Qian W. Fast and fully-automated detection and segmentation of pulmonary nodules in thoracic CT scans using deep convolutional neural networks. Comput Med Imaging Graph 2019; 74:25-36. [DOI: 10.1016/j.compmedimag.2019.02.003] [Citation(s) in RCA: 44] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2018] [Revised: 01/09/2019] [Accepted: 02/18/2019] [Indexed: 12/24/2022]
50
Nie K, Al-Hallaq H, Li XA, Benedict SH, Sohn JW, Moran JM, Fan Y, Huang M, Knopp MV, Michalski JM, Monroe J, Obcemea C, Tsien CI, Solberg T, Wu J, Xia P, Xiao Y, El Naqa I. NCTN Assessment on Current Applications of Radiomics in Oncology. Int J Radiat Oncol Biol Phys 2019; 104:302-315. [PMID: 30711529 PMCID: PMC6499656 DOI: 10.1016/j.ijrobp.2019.01.087] [Citation(s) in RCA: 36] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2018] [Revised: 01/17/2019] [Accepted: 01/23/2019] [Indexed: 02/06/2023]
Abstract
Radiomics is a fast-growing research area based on converting standard-of-care imaging into quantitative minable data and building subsequent predictive models to personalize treatment. Radiomics has been proposed as a study objective in clinical trial concepts and a potential biomarker for stratifying patients across interventional treatment arms. In recognizing the growing importance of radiomics in oncology, a group of medical physicists and clinicians from NRG Oncology reviewed the current status of the field and identified critical issues, providing a general assessment and early recommendations for incorporation in oncology studies.
Affiliation(s)
- Ke Nie
- Department of Radiation Oncology, Rutgers Cancer Institute of New Jersey, Rutgers-Robert Wood Johnson Medical School, New Brunswick, New Jersey
- Hania Al-Hallaq
- Department of Radiation and Cellular Oncology, University of Chicago, Chicago, Illinois
- X Allen Li
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin
- Stanley H Benedict
- Department of Radiation Oncology, University of California-Davis, Sacramento, California
- Jason W Sohn
- Department of Radiation Oncology, Allegheny Health Network, Pittsburgh, Pennsylvania
- Jean M Moran
- Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan
- Yong Fan
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Mi Huang
- Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Michael V Knopp
- Division of Imaging Science, Department of Radiology, Ohio State University, Columbus, Ohio
- Jeff M Michalski
- Department of Radiation Oncology, Washington University School of Medicine in St. Louis, St. Louis, Missouri
- James Monroe
- Department of Radiation Oncology, St. Anthony's Cancer Center, St. Louis, Missouri
- Ceferino Obcemea
- Radiation Research Program, National Cancer Institute, Bethesda, Maryland
- Christina I Tsien
- Department of Radiation Oncology, Washington University School of Medicine in St. Louis, St. Louis, Missouri
- Timothy Solberg
- Department of Radiation Oncology, University of California-San Francisco, San Francisco, California
- Jackie Wu
- Department of Radiation Oncology, Duke University, Durham, North Carolina
- Ping Xia
- Department of Radiation Oncology, Cleveland Clinic, Cleveland, Ohio
- Ying Xiao
- Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Issam El Naqa
- Department of Radiation and Cellular Oncology, University of Chicago, Chicago, Illinois