1. Souza LA, Passos LA, Santana MCS, Mendel R, Rauber D, Ebigbo A, Probst A, Messmann H, Papa JP, Palm C. Layer-selective deep representation to improve esophageal cancer classification. Med Biol Eng Comput 2024; 62:3355-3372. [PMID: 38848031] [DOI: 10.1007/s11517-024-03142-8]
Abstract
Although artificial intelligence and machine learning have demonstrated remarkable performance in medical image computing, their accountability and transparency must improve before this success can transfer into clinical practice. The reliability of machine learning decisions must be explained and interpreted, especially when they support medical diagnosis. For this task, the black-box nature of deep learning techniques must be illuminated to clarify their promising results. Hence, we investigate the impact of the ResNet-50 deep convolutional design on Barrett's esophagus and adenocarcinoma classification. To propose a two-step learning technique, the output of each convolutional layer composing the ResNet-50 architecture was trained and classified in order to identify the layers with the greatest impact on the architecture. We show that local information and high-dimensional features are essential to improve the classification for our task. Moreover, we observed a significant improvement when the most discriminative layers were given more weight in the training and classification of ResNet-50 for Barrett's esophagus and adenocarcinoma classification, demonstrating that both human knowledge and computational processing may influence the correct learning of such a problem.
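The two-step idea described above — score each convolutional stage's features separately, then let the most discriminative stages weigh more in the final decision — can be sketched in plain Python. All layer names, accuracies, and probabilities below are invented for illustration, not the paper's values:

```python
import math

def softmax(scores, temperature=10.0):
    """Turn per-layer scores into fusion weights; higher temperature sharpens them."""
    m = max(scores)
    exps = [math.exp(temperature * (s - m)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical step 1: validation accuracy of a classifier trained on each stage's features
layer_acc = {"conv2_x": 0.71, "conv3_x": 0.78, "conv4_x": 0.88, "conv5_x": 0.91}
weights = dict(zip(layer_acc, softmax(list(layer_acc.values()))))

# Hypothetical step 2: fuse per-layer adenocarcinoma probabilities for one image,
# so the most discriminative stages dominate the final decision
layer_prob = {"conv2_x": 0.55, "conv3_x": 0.60, "conv4_x": 0.85, "conv5_x": 0.90}
fused = sum(weights[k] * layer_prob[k] for k in layer_acc)
print(round(fused, 3))
```

The fused score sits close to the deepest layers' predictions because the softmax weighting favors the stages that classified best on their own.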
Affiliation(s)
- Luis A Souza
- Department of Informatics, Espírito Santo Federal University, Vitória, Brazil
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Germany
- Leandro A Passos
- CMI Lab, School of Engineering and Informatics, University of Wolverhampton, Wolverhampton, UK
- Robert Mendel
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Germany
- David Rauber
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Germany
- Alanna Ebigbo
- Department of Gastroenterology, University Hospital Augsburg, Augsburg, Germany
- Andreas Probst
- Department of Gastroenterology, University Hospital Augsburg, Augsburg, Germany
- Helmut Messmann
- Department of Gastroenterology, University Hospital Augsburg, Augsburg, Germany
- João Paulo Papa
- Department of Computing, São Paulo State University, Bauru, Brazil
- Christoph Palm
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Germany
2. Janaki R, Lakshmi D. Hybrid model-based early diagnosis of esophageal disorders using convolutional neural network and refined logistic regression. EURASIP J Image Video Process 2024; 2024:19. [DOI: 10.1186/s13640-024-00634-3]
3. Lin Q, Tan W, Cai S, Yan B, Li J, Zhong Y. Lesion-Decoupling-Based Segmentation With Large-Scale Colon and Esophageal Datasets for Early Cancer Diagnosis. IEEE Trans Neural Netw Learn Syst 2024; 35:11142-11156. [PMID: 37028330] [DOI: 10.1109/tnnls.2023.3248804]
Abstract
Lesions of early cancers often appear flat, small, and isochromatic in medical endoscopy images, making them difficult to capture. By analyzing the differences between the internal and external features of the lesion area, we propose a lesion-decoupling-based segmentation (LDS) network for assisting early cancer diagnosis. We introduce a plug-and-play module, the self-sampling similar feature disentangling module (FDM), to obtain accurate lesion boundaries. We then propose a feature separation loss (FSL) function to separate pathological features from normal ones. Moreover, since physicians make diagnoses with multimodal data, we propose a multimodal cooperative segmentation network that takes two different modal images as input: white-light images (WLIs) and narrowband images (NBIs). Our FDM and FSL perform well for both single-modal and multimodal segmentation. Extensive experiments on five backbones show that FDM and FSL can be easily applied to different backbones for a significant improvement in lesion segmentation accuracy, with a maximum mean Intersection over Union (mIoU) increase of 4.58. For colonoscopy, we achieve an mIoU of up to 91.49 on our Dataset A and 84.41 on the three public datasets. For esophagoscopy, the best mIoU achieved is 64.32 on the WLI dataset and 66.31 on the NBI dataset.
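For reference, the mIoU figures quoted above are means of per-image Intersection over Union. A minimal pure-Python version over flattened binary masks (mask values below are made up) looks like:

```python
def iou(pred, target):
    """IoU of two flattened binary masks (lists of 0/1)."""
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return inter / union if union else 1.0  # two empty masks count as a perfect match

def mean_iou(pairs):
    """Average IoU over (prediction, ground truth) mask pairs."""
    return sum(iou(p, t) for p, t in pairs) / len(pairs)

pairs = [
    ([1, 1, 0, 0], [1, 0, 1, 0]),  # overlap 1 pixel, union 3 pixels -> IoU = 1/3
    ([1, 1, 1, 0], [1, 1, 1, 0]),  # exact match -> IoU = 1
]
print(round(mean_iou(pairs), 3))
```

Real segmentation pipelines compute the same quantity over 2-D tensors, but the definition is identical.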
4. Hou M, Wang J, Liu T, Li Z, Hounye AH, Liu X, Wang K, Chen S. A graph-optimized deep learning framework for recognition of Barrett’s esophagus and reflux esophagitis. Multimed Tools Appl 2024; 83:83747-83767. [DOI: 10.1007/s11042-024-18910-9]
5. Guidozzi N, Menon N, Chidambaram S, Markar SR. The role of artificial intelligence in the endoscopic diagnosis of esophageal cancer: a systematic review and meta-analysis. Dis Esophagus 2023; 36:doad048. [PMID: 37480192] [PMCID: PMC10789250] [DOI: 10.1093/dote/doad048]
Abstract
Early detection of esophageal cancer is limited by accurate endoscopic diagnosis of subtle macroscopic lesions. Endoscopic interpretation is subject to expertise and diagnostic skill, and thus to human error. Artificial intelligence (AI) in endoscopy is increasingly bridging this gap. This systematic review and meta-analysis consolidates the evidence on the use of AI in the endoscopic diagnosis of esophageal cancer. The systematic review was carried out using the PubMed, MEDLINE, and Ovid EMBASE databases, and articles on the role of AI in the endoscopic diagnosis of esophageal cancer were included. A meta-analysis was also performed. Fourteen studies (1590 patients) assessed the use of AI in the endoscopic diagnosis of esophageal squamous cell carcinoma; the pooled sensitivity and specificity were 91.2% (84.3-95.2%) and 80% (64.3-89.9%). Nine studies (478 patients) assessed AI capabilities for diagnosing esophageal adenocarcinoma, with pooled sensitivity and specificity of 93.1% (86.8-96.4%) and 86.9% (81.7-90.7%). The remaining studies formed the qualitative summary. AI technology, as an adjunct to endoscopy, can assist in accurate, early detection of esophageal malignancy. It has shown results superior to those of endoscopists alone in identifying early cancer and assessing depth of tumor invasion, with the added benefit of not requiring a specialized skill set. Despite promising results, application in real-time endoscopy is limited, and further multicenter trials are required to accurately assess its use in routine practice.
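As a simplified illustration of how sensitivity and specificity can be pooled across studies from 2x2 counts (real diagnostic meta-analyses such as this one typically fit a bivariate random-effects model rather than naively summing counts), with invented per-study numbers:

```python
def pooled_sens_spec(studies):
    """Naively pool (tp, fn, fp, tn) counts across studies into one 2x2 table."""
    tp = sum(s[0] for s in studies)
    fn = sum(s[1] for s in studies)
    fp = sum(s[2] for s in studies)
    tn = sum(s[3] for s in studies)
    return tp / (tp + fn), tn / (tn + fp)  # (sensitivity, specificity)

# Hypothetical counts for two studies: (true pos, false neg, false pos, true neg)
studies = [(88, 12, 20, 80), (45, 5, 8, 42)]
sens, spec = pooled_sens_spec(studies)
print(f"sensitivity={sens:.1%} specificity={spec:.1%}")
```

The naive pooling ignores between-study heterogeneity, which is exactly what the random-effects machinery in a full meta-analysis accounts for.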
Affiliation(s)
- Nadia Guidozzi
- Department of General Surgery, University of Witwatersrand, Johannesburg, South Africa
- Nainika Menon
- Department of General Surgery, Oxford University Hospitals, Oxford, UK
- Swathikan Chidambaram
- Academic Surgical Unit, Department of Surgery and Cancer, Imperial College London, St Mary’s Hospital, London, UK
- Sheraz Rehan Markar
- Department of General Surgery, Oxford University Hospitals, Oxford, UK
- Nuffield Department of Surgery, University of Oxford, Oxford, UK
6. Luo D, Kuang F, Du J, Zhou M, Liu X, Luo X, Tang Y, Li B, Su S. Artificial Intelligence-Assisted Endoscopic Diagnosis of Early Upper Gastrointestinal Cancer: A Systematic Review and Meta-Analysis. Front Oncol 2022; 12:855175. [PMID: 35756602] [PMCID: PMC9229174] [DOI: 10.3389/fonc.2022.855175]
Abstract
Objective The aim of this study was to assess the diagnostic ability of artificial intelligence (AI) in the detection of early upper gastrointestinal cancer (EUGIC) using endoscopic images. Methods Databases were searched for studies on AI-assisted diagnosis of EUGIC using endoscopic images. The pooled area under the curve (AUC), sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), and diagnostic odds ratio (DOR) with 95% confidence intervals (CIs) were calculated. Results Overall, 34 studies were included in the final analysis. Among the 17 image-based studies investigating early esophageal cancer (EEC) detection, the pooled AUC, sensitivity, specificity, PLR, NLR, and DOR were 0.98, 0.95 (95% CI, 0.95–0.96), 0.95 (95% CI, 0.94–0.95), 10.76 (95% CI, 7.33–15.79), 0.07 (95% CI, 0.04–0.11), and 173.93 (95% CI, 81.79–369.83), respectively. Among the seven patient-based studies investigating EEC detection, the pooled AUC, sensitivity, specificity, PLR, NLR, and DOR were 0.98, 0.94 (95% CI, 0.91–0.96), 0.90 (95% CI, 0.88–0.92), 6.14 (95% CI, 2.06–18.30), 0.07 (95% CI, 0.04–0.11), and 69.13 (95% CI, 14.73–324.45), respectively. Among the 15 image-based studies investigating early gastric cancer (EGC) detection, the pooled AUC, sensitivity, specificity, PLR, NLR, and DOR were 0.94, 0.87 (95% CI, 0.87–0.88), 0.88 (95% CI, 0.87–0.88), 7.20 (95% CI, 4.32–12.00), 0.14 (95% CI, 0.09–0.23), and 48.77 (95% CI, 24.98–95.19), respectively. Conclusions On the basis of our meta-analysis, AI exhibited high accuracy in the diagnosis of EUGIC. Systematic Review Registration https://www.crd.york.ac.uk/PROSPERO/, identifier CRD42021270443.
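The diagnostic statistics listed above all derive from a single 2x2 confusion table. A small sketch with invented counts (not this study's data) shows how sensitivity, specificity, PLR, NLR, and DOR relate:

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Standard diagnostic-accuracy statistics from a 2x2 confusion table."""
    sens = tp / (tp + fn)    # true positive rate
    spec = tn / (tn + fp)    # true negative rate
    plr = sens / (1 - spec)  # positive likelihood ratio
    nlr = (1 - sens) / spec  # negative likelihood ratio
    dor = plr / nlr          # diagnostic odds ratio, equivalently (tp*tn)/(fp*fn)
    return sens, spec, plr, nlr, dor

# Hypothetical counts: 90 true positives, 10 false negatives, 5 false positives, 95 true negatives
sens, spec, plr, nlr, dor = diagnostic_metrics(tp=90, fn=10, fp=5, tn=95)
print(sens, spec, round(plr, 2), round(nlr, 3), round(dor, 1))
```

Note that the DOR is just the ratio of the two likelihood ratios, which is why a test with PLR well above 1 and NLR well below 1 yields a very large DOR.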
Affiliation(s)
- De Luo
- Department of Hepatobiliary Surgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Fei Kuang
- Department of General Surgery, Changhai Hospital of The Second Military Medical University, Shanghai, China
- Juan Du
- Department of Clinical Medicine, Southwest Medical University, Luzhou, China
- Mengjia Zhou
- Department of Ultrasound, Seventh People's Hospital of Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Xiangdong Liu
- Department of Hepatobiliary Surgery, Zigong Fourth People's Hospital, Zigong, China
- Xinchen Luo
- Department of Gastroenterology, Zigong Third People's Hospital, Zigong, China
- Yong Tang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Bo Li
- Department of Hepatobiliary Surgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Song Su
- Department of Hepatobiliary Surgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China
7. Azam MA, Sampieri C, Ioppi A, Benzi P, Giordano GG, De Vecchi M, Campagnari V, Li S, Guastini L, Paderno A, Moccia S, Piazza C, Mattos LS, Peretti G. Videomics of the Upper Aero-Digestive Tract Cancer: Deep Learning Applied to White Light and Narrow Band Imaging for Automatic Segmentation of Endoscopic Images. Front Oncol 2022; 12:900451. [PMID: 35719939] [PMCID: PMC9198427] [DOI: 10.3389/fonc.2022.900451]
Abstract
Introduction Narrow Band Imaging (NBI) is an endoscopic visualization technique useful for upper aero-digestive tract (UADT) cancer detection and margin evaluation. However, NBI analysis is strongly operator-dependent and requires high expertise, thus limiting its wider implementation. Recently, artificial intelligence (AI) has demonstrated potential for applications in UADT videoendoscopy. Among AI methods, deep learning (DL) algorithms, and especially convolutional neural networks (CNNs), are particularly suitable for delineating cancers on videoendoscopy. This study aimed to develop a CNN for automatic semantic segmentation of UADT cancer on endoscopic images. Materials and Methods A dataset of white light and NBI videoframes of laryngeal squamous cell carcinoma (LSCC) was collected and manually annotated. A novel DL segmentation model (SegMENT) was designed. SegMENT relies on the DeepLabV3+ CNN architecture, modified to use Xception as a backbone and to incorporate ensemble features from other CNNs. The performance of SegMENT was compared to state-of-the-art CNNs (UNet, ResUNet, and DeepLabv3). SegMENT was then validated on two external datasets of NBI images of oropharyngeal (OPSCC) and oral cavity (OCSCC) squamous cell carcinoma obtained from a previously published study. The impact of in-domain transfer learning through an ensemble technique was evaluated on the external datasets. Results 219 LSCC patients were retrospectively included in the study. A total of 683 videoframes composed the LSCC dataset, while the external validation cohorts of OPSCC and OCSCC contained 116 and 102 images, respectively. On the LSCC dataset, SegMENT outperformed the other DL models, obtaining the following median values: 0.68 intersection over union (IoU), 0.81 dice similarity coefficient (DSC), 0.95 recall, 0.78 precision, and 0.97 accuracy. For the OCSCC and OPSCC datasets, results were superior to previously published data; the median performance metrics improved, respectively, by: DSC 10.3% and 11.9%, recall 15.0% and 5.1%, precision 17.0% and 14.7%, and accuracy 4.1% and 10.3%. Conclusion SegMENT achieved promising performance, showing that automatic tumor segmentation in endoscopic images is feasible even within the highly heterogeneous and complex UADT environment. SegMENT outperformed the previously published results on the external validation cohorts. The model demonstrated potential for improved detection of early tumors, more precise biopsies, and better selection of resection margins.
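A useful sanity check on segmentation reports like this one: per image, the Dice similarity coefficient and IoU are deterministic transforms of each other, DSC = 2·IoU/(1+IoU). The identity holds exactly per image; medians of each, as reported above, satisfy it only approximately:

```python
def dsc_from_iou(iou):
    """Dice coefficient implied by an IoU (Jaccard) value; exact per image."""
    return 2 * iou / (1 + iou)

def iou_from_dsc(dsc):
    """Inverse transform: IoU implied by a Dice coefficient."""
    return dsc / (2 - dsc)

# A median IoU of 0.68, as reported above, corresponds to a Dice of about 0.81
print(round(dsc_from_iou(0.68), 3))
```

That the reported medians (IoU 0.68, DSC 0.81) sit on this curve suggests the per-image score distribution is fairly tight.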
Affiliation(s)
- Muhammad Adeel Azam
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Claudio Sampieri
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Alessandro Ioppi
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Pietro Benzi
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Giorgio Gregory Giordano
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Marta De Vecchi
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Valentina Campagnari
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Shunlei Li
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Luca Guastini
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Alberto Paderno
- Unit of Otorhinolaryngology - Head and Neck Surgery, ASST Spedali Civili of Brescia, Brescia, Italy
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Brescia, Italy
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy
- Cesare Piazza
- Unit of Otorhinolaryngology - Head and Neck Surgery, ASST Spedali Civili of Brescia, Brescia, Italy
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Brescia, Italy
- Leonardo S Mattos
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Giorgio Peretti
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
8. Maslyonkina KS, Konyukova AK, Alexeeva DY, Sinelnikov MY, Mikhaleva LM. Barrett's esophagus: The pathomorphological and molecular genetic keystones of neoplastic progression. Cancer Med 2022; 11:447-478. [PMID: 34870375] [PMCID: PMC8729054] [DOI: 10.1002/cam4.4447]
Abstract
Barrett's esophagus is a widespread, chronically progressing disease of heterogeneous nature. A life-threatening complication of this condition is neoplastic transformation, which is often overlooked due to a lack of standardized approaches in diagnosis, preventative measures, and treatment. In this essay, we aim to stratify existing data to show specific associations between neoplastic transformation and the underlying processes which predate cancerous transition. We discuss pathomorphological, genetic, epigenetic, molecular, and immunohistochemical methods related to neoplasia detection on the basis of Barrett's esophagus. Our review sheds light on pathways of such neoplastic progression in the distal esophagus, providing valuable insight into progression assessment, preventative targets, and treatment modalities. Our results suggest that molecular, genetic, and epigenetic alterations in the esophagus arise earlier than cancerous transformation, meaning the discussed targets can help form preventative strategies in at-risk patient groups.
9. Pan W, Li X, Wang W, Zhou L, Wu J, Ren T, Liu C, Lv M, Su S, Tang Y. Identification of Barrett's esophagus in endoscopic images using deep learning. BMC Gastroenterol 2021; 21:479. [PMID: 34920705] [PMCID: PMC8684213] [DOI: 10.1186/s12876-021-02055-2]
Abstract
BACKGROUND To develop a deep learning method that identifies the extent of Barrett's esophagus (BE) in endoscopic images. METHODS 443 endoscopic images from 187 patients with BE were included in this study. The gastroesophageal junction (GEJ) and squamous-columnar junction (SCJ) of BE were manually annotated in endoscopic images by experts. Fully convolutional neural networks (FCNs) were developed to automatically identify the extent of BE in endoscopic images. The networks were trained and evaluated on two separate image sets. Segmentation performance was evaluated by intersection over union (IOU). RESULTS The deep learning method performed well in the automated identification of BE in endoscopic images. The IOU values were 0.56 (GEJ) and 0.82 (SCJ), respectively. CONCLUSIONS The deep learning algorithm is promising, showing good concordance with manual expert assessment in segmenting the extent of BE in endoscopic images. This automated recognition method helps clinicians to locate and recognize the extent of BE in endoscopic examinations.
Affiliation(s)
- Wen Pan
- Department of Digestion, West China Hospital of Sichuan University, Chengdu, 610054, Sichuan, China
- Department of Digestion, The Hospital of Chengdu Office of People's Government of Tibetan Autonomous Region, Ximianqiao Street No.20, Chengdu, 610054, Sichuan, China
- Xujia Li
- Department of General Surgery (Hepatobiliary Surgery), The Affiliated Hospital of Southwest Medical University, Taiping Street No.25, Luzhou, 646000, Sichuan, China
- Weijia Wang
- School of Information and Software Engineering, University of Electronic Science and Technology of China, 4 North Jianshe Road, Chengdu, 610054, Sichuan, China
- Linjing Zhou
- School of Information and Software Engineering, University of Electronic Science and Technology of China, 4 North Jianshe Road, Chengdu, 610054, Sichuan, China
- Jiali Wu
- Department of Anesthesiology, The Affiliated Hospital of Southwest Medical University, Taiping Street No.25, Luzhou, 646000, Sichuan, China
- Tao Ren
- Department of Digestion, The Hospital of Chengdu Office of People's Government of Tibetan Autonomous Region, Ximianqiao Street No.20, Chengdu, 610054, Sichuan, China
- Chao Liu
- Department of Digestion, The Hospital of Chengdu Office of People's Government of Tibetan Autonomous Region, Ximianqiao Street No.20, Chengdu, 610054, Sichuan, China
- Muhan Lv
- Department of Digestion, The Affiliated Hospital of Southwest Medical University, Taiping Street No.25, Luzhou, 646000, Sichuan, China
- Song Su
- Department of General Surgery (Hepatobiliary Surgery), The Affiliated Hospital of Southwest Medical University, Taiping Street No.25, Luzhou, 646000, Sichuan, China
- Yong Tang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, 4 North Jianshe Road, Chengdu, 610054, Sichuan, China
10. Nawab K, Athwani R, Naeem A, Hamayun M, Wazir M. A Review of Applications of Artificial Intelligence in Gastroenterology. Cureus 2021; 13:e19235. [PMID: 34877212] [PMCID: PMC8642128] [DOI: 10.7759/cureus.19235]
Abstract
Artificial intelligence (AI) is the science of creating 'intelligent machines'. AI has revolutionized medicine through its application in several fields, including radiology, neurology, ophthalmology, orthopedics, and gastroenterology. In this review, we summarize the basics of AI, its applications to various gastrointestinal pathologies to date, and the challenges related to applying AI in medicine. A literature search was performed using Google Scholar, PubMed, and ScienceDirect with keywords such as 'artificial intelligence', 'gastroenterology', and 'applications'. Relevant articles were gathered and data were extracted from them. We conclude that AI has achieved major feats in the past few decades: it has helped clinicians diagnose complex diseases, manage treatments, and predict outcomes, enabling doctors across the globe to dispense better healthcare services.
Affiliation(s)
- Khalid Nawab
- Internal Medicine, Penn State Holy Spirit Hospital, Camp Hill, USA
- Ravi Athwani
- Internal Medicine, Penn State Holy Spirit Hospital, Camp Hill, USA
- Awais Naeem
- Internal Medicine, Khyber Medical University, Peshawar, PAK
- Momna Wazir
- Internal Medicine, Hayatabad Medical Complex, Peshawar, PAK
11. Ghatwary N, Zolgharni M, Janan F, Ye X. Learning Spatiotemporal Features for Esophageal Abnormality Detection From Endoscopic Videos. IEEE J Biomed Health Inform 2021; 25:131-142. [PMID: 32750901] [DOI: 10.1109/jbhi.2020.2995193]
Abstract
Esophageal cancer is a disease with a high mortality rate. Early detection of esophageal abnormalities (i.e., precancerous and early cancerous lesions) can improve the survival rate of patients. Deep learning-based methods have recently been proposed for detecting selected types of esophageal abnormality from endoscopic images. However, no methods in the literature cover detection from endoscopic videos, detection from challenging frames, or detection of more than one esophageal abnormality type. In this paper, we present an efficient method to automatically detect different types of esophageal abnormalities from endoscopic videos. We propose a novel 3D Sequential DenseConvLstm network that extracts spatiotemporal features from the input video. Our network incorporates a 3D convolutional neural network (3DCNN) and convolutional LSTM (ConvLSTM) to efficiently learn short- and long-term spatiotemporal features. The generated feature map is utilized by a region proposal network and an ROI pooling layer to produce bounding boxes that detect abnormality regions in each frame throughout the video. Finally, we investigate a post-processing method, Frame Search Conditional Random Field (FS-CRF), that improves the overall performance of the model by recovering missing regions in neighborhood frames within the same clip. We extensively validate our model on an endoscopic video dataset that includes a variety of esophageal abnormalities. Our model achieved high performance across evaluation metrics: 93.7% recall, 92.7% precision, and 93.2% F-measure. Moreover, as no results have been reported in the literature for esophageal abnormality detection from endoscopic videos, we tested the model's robustness on a publicly available colonoscopy video dataset, achieving polyp detection performance of 81.18% recall, 96.45% precision, and 88.16% F-measure, compared with state-of-the-art results of 78.84% recall, 90.51% precision, and 84.27% F-measure on the same dataset. This demonstrates that the proposed method can be adapted to different gastrointestinal endoscopic video applications with promising performance.
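The F-measures reported above follow from the stated recall and precision via the harmonic mean, F = 2·R·P/(R+P); a quick check reproduces both figures:

```python
def f_measure(recall, precision):
    """Harmonic mean of recall and precision (the F1 score)."""
    return 2 * recall * precision / (recall + precision)

# Esophageal model: recall 93.7%, precision 92.7% -> F-measure 93.2%
print(round(f_measure(0.937, 0.927), 3))
# Colonoscopy transfer: recall 81.18%, precision 96.45% -> F-measure 88.16%
print(round(f_measure(0.8118, 0.9645), 4))
```

Because the harmonic mean is dominated by the smaller of the two values, the colonoscopy F-measure sits much closer to the 81.18% recall than to the 96.45% precision.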
12. Syed T, Doshi A, Guleria S, Syed S, Shah T. Artificial Intelligence and Its Role in Identifying Esophageal Neoplasia. Dig Dis Sci 2020; 65:3448-3455. [PMID: 33057945] [PMCID: PMC8139616] [DOI: 10.1007/s10620-020-06643-2]
Abstract
Randomized trials have demonstrated that ablation of dysplastic Barrett's esophagus can reduce the risk of progression to cancer. Endoscopic resection for early-stage esophageal adenocarcinoma and squamous cell carcinoma can significantly reduce postoperative morbidity compared to esophagectomy. Unfortunately, current endoscopic surveillance technologies (e.g., high-definition white light and electronic and dye-based chromoendoscopy) lack sensitivity for identifying subtle areas of dysplasia and cancer. Random biopsies sample only approximately 5% of the esophageal mucosa at risk, and there is poor agreement among pathologists in identifying low-grade dysplasia. Machine-based deep learning technologies for medical image and video assessment have progressed significantly in recent years, enabled in large part by advances in computer processing capabilities. In deep learning, sequential layers allow models to transform input data (e.g., pixels for imaging data) into a composite representation that allows for classification and feature identification. Several publications have attempted to use this technology to help identify dysplasia and early esophageal cancer. The aims of this review are as follows: (a) discussing limitations in our current strategies to identify esophageal dysplasia and cancer, (b) explaining the concepts behind deep learning and convolutional neural networks using language appropriate for clinicians without an engineering background, (c) systematically reviewing the literature for studies that have used deep learning to identify esophageal neoplasia, and (d) based on the systematic review, outlining strategies on further work necessary before these technologies are ready for "prime time," i.e., use in routine clinical care.
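The "sequential layers transform input data into a composite representation" idea can be made concrete with a toy fully connected forward pass in pure Python. The weights, input, and the "class probability" interpretation are invented purely for illustration; real models learn these weights from annotated images:

```python
import math

def dense_relu(v, weights, biases):
    """One layer: an affine transform followed by a ReLU nonlinearity."""
    return [max(0.0, sum(w * x for w, x in zip(row, v)) + b)
            for row, b in zip(weights, biases)]

x = [0.2, 0.7, 0.1]  # toy "pixel" input vector
# Layer 1: 3 inputs -> 2 hidden features; layer 2: 2 features -> 1 output
h1 = dense_relu(x, [[0.5, -0.3, 0.8], [0.1, 0.9, -0.2]], [0.0, 0.1])
h2 = dense_relu(h1, [[-1.0, 1.0]], [0.0])
score = 1 / (1 + math.exp(-h2[0]))  # sigmoid squashes the output to a probability
print(round(score, 3))
```

Each layer re-represents its input, so by the final layer the raw "pixels" have been composed into a single feature that a sigmoid can turn into a class probability.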
Affiliation(s)
- Taseen Syed
- Division of Gastroenterology, Virginia Commonwealth University Health System, 1200 East Marshall St, PO Box 980711, Richmond, VA, 23298, USA
- Division of Gastroenterology, Hunter Holmes McGuire Veterans Affairs Medical Center, Richmond, VA, USA
- Akash Doshi
- University of Miami Miller School of Medicine, Miami, FL, USA
- Shan Guleria
- Department of Medicine, Rush University Medical Center, Chicago, IL, USA
- Sana Syed
- Department of Pediatrics, Division of Gastroenterology, Hepatology and Nutrition, University of Virginia School of Medicine and UVA Child Health Research Center, Charlottesville, VA, USA
- Tilak Shah
- Division of Gastroenterology, Virginia Commonwealth University Health System, 1200 East Marshall St, PO Box 980711, Richmond, VA, 23298, USA
- Division of Gastroenterology, Hunter Holmes McGuire Veterans Affairs Medical Center, Richmond, VA, USA
13. Viswanath YKS, Vaze S, Bird R. Application of convolutional neural networks for computer-aided detection and diagnosis in gastrointestinal pathology: A simplified exposition for an endoscopist. Artif Intell Gastrointest Endosc 2020; 1:1-5. [DOI: 10.37126/aige.v1.i1.1]
|
14
|
Morreale GC, Sinagra E, Vitello A, Shahini E, Shahini E, Maida M. Emerging artificia intelligence applications in gastroenterology: A review of the literature. Artif Intell Gastrointest Endosc 2020; 1:6-18. [DOI: 10.37126/aige.v1.i1.6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/23/2020] [Revised: 07/07/2020] [Accepted: 07/16/2020] [Indexed: 02/06/2023] Open
|
16
|
Lazăr DC, Avram MF, Faur AC, Goldiş A, Romoşan I, Tăban S, Cornianu M. The Impact of Artificial Intelligence in the Endoscopic Assessment of Premalignant and Malignant Esophageal Lesions: Present and Future. Medicina (Kaunas, Lithuania) 2020; 56:364. [PMID: 32708343 PMCID: PMC7404688 DOI: 10.3390/medicina56070364] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/13/2020] [Revised: 07/13/2020] [Accepted: 07/16/2020] [Indexed: 02/07/2023]
Abstract
In the gastroenterology field, the impact of artificial intelligence has been investigated for the purposes of diagnostics, risk stratification of patients, improvement in the quality of endoscopic procedures, early detection of neoplastic diseases, implementation of the best treatment strategy, and optimization of patient prognosis. Computer-assisted diagnostic systems for evaluating upper endoscopy images have recently emerged as a supporting tool in endoscopy, owing to the risk of misdiagnosis with standard endoscopy, differing expertise levels among endoscopists, time-consuming procedures, limited availability of advanced procedures, increasing workloads, and the development of endoscopic mass screening programs. Recent research has tended toward computerized, automatic, and real-time detection of lesions, approaches that offer utility in daily practice. Despite promising results, certain studies may overstate the diagnostic accuracy of artificial systems, and several limitations remain to be overcome. Therefore, additional multicenter randomized trials and the expansion of existing database platforms are needed to validate clinical implementation. This paper presents an overview of the literature and current knowledge of the usefulness of different types of machine learning systems in the assessment of premalignant and malignant esophageal lesions via conventional and advanced endoscopic procedures. It introduces artificial intelligence terminology and reviews the most prominent recent research on computer-assisted diagnosis of neoplasia in Barrett's esophagus and early esophageal squamous cell carcinoma, and on prediction of invasion depth in esophageal neoplasms. Furthermore, this review highlights the main directions of future doctor-computer collaboration, in which machines are expected to improve the quality of medical action and routine clinical workflow, thus reducing the burden on physicians.
Collapse
Affiliation(s)
- Daniela Cornelia Lazăr
- Department V of Internal Medicine I, Discipline of Internal Medicine IV, “Victor Babeș” University of Medicine and Pharmacy Timișoara, Romania, Eftimie Murgu Sq. no. 2, 300041 Timișoara, Romania; (D.C.L.); (I.R.)
| | - Mihaela Flavia Avram
- Department of Surgery X, 1st Surgery Discipline, “Victor Babeș” University of Medicine and Pharmacy Timișoara, Romania, Eftimie Murgu Sq. no. 2, 300041 Timișoara, Romania
| | - Alexandra Corina Faur
- Department I, Discipline of Anatomy and Embriology, “Victor Babeș” University of Medicine and Pharmacy Timișoara, Romania, Eftimie Murgu Sq. no. 2, 300041 Timișoara, Romania;
| | - Adrian Goldiş
- Department VII of Internal Medicine II, Discipline of Gastroenterology and Hepatology, “Victor Babeș” University of Medicine and Pharmacy Timișoara, Romania, Eftimie Murgu Sq. no. 2, 300041 Timișoara, Romania;
| | - Ioan Romoşan
- Department V of Internal Medicine I, Discipline of Internal Medicine IV, “Victor Babeș” University of Medicine and Pharmacy Timișoara, Romania, Eftimie Murgu Sq. no. 2, 300041 Timișoara, Romania; (D.C.L.); (I.R.)
| | - Sorina Tăban
- Department II of Microscopic Morphology, Discipline of Pathology, “Victor Babeș” University of Medicine and Pharmacy Timișoara, Romania, Eftimie Murgu Sq. no. 2, 300041 Timișoara, Romania; (S.T.); (M.C.)
| | - Mărioara Cornianu
- Department II of Microscopic Morphology, Discipline of Pathology, “Victor Babeș” University of Medicine and Pharmacy Timișoara, Romania, Eftimie Murgu Sq. no. 2, 300041 Timișoara, Romania; (S.T.); (M.C.)
| |
Collapse
|
17
|
Ebigbo A, Mendel R, Probst A, Manzeneder J, Prinz F, de Souza Jr. LA, Papa J, Palm C, Messmann H. Real-time use of artificial intelligence in the evaluation of cancer in Barrett's oesophagus. Gut 2020; 69:615-616. [PMID: 31541004 PMCID: PMC7063447 DOI: 10.1136/gutjnl-2019-319460] [Citation(s) in RCA: 121] [Impact Index Per Article: 24.2] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Revised: 08/30/2019] [Accepted: 09/08/2019] [Indexed: 12/12/2022]
Affiliation(s)
- Alanna Ebigbo
- Department of Gastroenterology, Universitätsklinikum Augsburg, Augsburg, Germany
| | - Robert Mendel
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Germany; Regensburg Center of Health Sciences and Technology, OTH Regensburg, Regensburg, Germany
| | - Andreas Probst
- Department of Gastroenterology, Universitätsklinikum Augsburg, Augsburg, Germany
| | - Johannes Manzeneder
- Department of Gastroenterology, Universitätsklinikum Augsburg, Augsburg, Germany
| | - Friederike Prinz
- Department of Gastroenterology, Universitätsklinikum Augsburg, Augsburg, Germany
| | - Luis A de Souza Jr.
- Department of Computing, Federal University of São Carlos, São Carlos, Brazil
| | - Joao Papa
- Department of Computing, São Paulo State University, Bauru, Brazil
| | - Christoph Palm
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Germany; Regensburg Center of Health Sciences and Technology, OTH Regensburg, Regensburg, Germany
| | - Helmut Messmann
- Department of Gastroenterology, Universitätsklinikum Augsburg, Augsburg, Germany
| |
Collapse
|
18
|
Moccia S, Romeo L, Migliorelli L, Frontoni E, Zingaretti P. Supervised CNN Strategies for Optical Image Segmentation and Classification in Interventional Medicine. INTELLIGENT SYSTEMS REFERENCE LIBRARY 2020. [DOI: 10.1007/978-3-030-42750-4_8] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
|
19
|
Tong L, Wu H, Wang MD. CAESNet: Convolutional AutoEncoder based Semi-supervised Network for improving multiclass classification of endomicroscopic images. J Am Med Inform Assoc 2019; 26:1286-1296. [PMID: 31260038 PMCID: PMC6798571 DOI: 10.1093/jamia/ocz089] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2018] [Revised: 04/17/2019] [Accepted: 06/09/2019] [Indexed: 12/11/2022] Open
Abstract
OBJECTIVE This article presents a novel method of semisupervised learning using convolutional autoencoders for optical endomicroscopic images. Optical endomicroscopy (OE) is a newly emerged biomedical imaging modality that can support real-time clinical decisions for the grade of dysplasia. To enable real-time decision making, computer-aided diagnosis (CAD) is essential for its high speed and objectivity. However, traditional supervised CAD requires a large amount of training data. Compared with the limited number of labeled images, we can collect a larger number of unlabeled images. To utilize these unlabeled images, we have developed a Convolutional AutoEncoder based Semi-supervised Network (CAESNet) for improving the classification performance. MATERIALS AND METHODS We applied our method to an OE dataset collected from patients undergoing endoscope-based confocal laser endomicroscopy procedures for Barrett's esophagus at Emory Hospital, which consists of 429 labeled images and 2826 unlabeled images. Our CAESNet consists of an encoder with 5 convolutional layers, a decoder with 5 transposed convolutional layers, and a classification network with 2 fully connected layers and a softmax layer. In the unsupervised stage, we first update the encoder and decoder with both labeled and unlabeled images to learn an efficient feature representation. In the supervised stage, we further update the encoder and the classification network with only labeled images for multiclass classification of the OE images. RESULTS Our proposed semisupervised method CAESNet achieves the best average performance for multiclass classification of OE images, surpassing the performance of supervised methods including standard convolutional networks and convolutional autoencoder networks. CONCLUSIONS Our semisupervised CAESNet can efficiently utilize the unlabeled OE images, which improves the diagnosis and decision making for patients with Barrett's esophagus.
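The two-stage scheme this abstract describes (stage 1: fit encoder and decoder on all images by reconstruction loss; stage 2: fine-tune the encoder plus a softmax head on labeled images only) can be sketched with a toy linear model. Everything below is an illustrative assumption: the synthetic two-cluster data, the dimensions, and the learning rate stand in for CAESNet's actual convolutional layers and OE dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, n_lab, n_unlab, lr = 8, 4, 60, 200, 0.01
centers = np.array([1.0, -1.0])                      # two synthetic "classes"
y = rng.integers(0, 2, n_lab)
X_lab = centers[y][:, None] + 0.3 * rng.standard_normal((n_lab, d))
X_unlab = (centers[rng.integers(0, 2, n_unlab)][:, None]
           + 0.3 * rng.standard_normal((n_unlab, d)))
X_all = np.vstack([X_lab, X_unlab])                  # labeled + unlabeled pool

We = 0.1 * rng.standard_normal((k, d))               # encoder weights
Wd = 0.1 * rng.standard_normal((d, k))               # decoder weights
Wc = 0.1 * rng.standard_normal((2, k))               # classifier head
b = np.zeros(2)

# Stage 1 (unsupervised): minimize reconstruction error on ALL images.
recon_losses = []
for _ in range(300):
    H = X_all @ We.T                                 # encode
    Xhat = H @ Wd.T                                  # decode
    err = Xhat - X_all
    recon_losses.append(float((err ** 2).mean()))
    dXhat = 2.0 * err / len(X_all)
    dH = dXhat @ Wd                                  # backprop through decoder
    Wd -= lr * dXhat.T @ H
    We -= lr * dH.T @ X_all

# Stage 2 (supervised): cross-entropy on labeled images only,
# updating both the encoder and the classification head.
for _ in range(300):
    H = X_lab @ We.T
    logits = H @ Wc.T + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                # softmax probabilities
    dlogits = (p - np.eye(2)[y]) / n_lab
    dH = dlogits @ Wc
    Wc -= lr * dlogits.T @ H
    b -= lr * dlogits.sum(axis=0)
    We -= lr * dH.T @ X_lab

acc = float(((X_lab @ We.T @ Wc.T + b).argmax(axis=1) == y).mean())
```

In the paper itself, the linear maps `We`, `Wd`, and `Wc` correspond to the 5-layer convolutional encoder, 5-layer transposed-convolutional decoder, and 2-layer fully connected softmax classifier; the training order (autoencoder first, classifier second) is the part the sketch preserves.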
Collapse
Affiliation(s)
- Li Tong
- Department of Biomedical Engineering, Georgia Institute of Technology, Emory University, Atlanta, Georgia, USA
| | - Hang Wu
- Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
| | - May D Wang
- Department of Biomedical Engineering, Georgia Institute of Technology, Emory University, Atlanta, Georgia, USA
- Departments of Electrical and Computer Engineering, Computational Science and Engineering, Winship Cancer Institute, Parker H. Petit Institute for Bioengineering and Biosciences, Institute of People and Technology, Georgia Institute of Technology and Emory University, Atlanta, Georgia, USA
| |
Collapse
|
20
|
Early esophageal adenocarcinoma detection using deep learning methods. Int J Comput Assist Radiol Surg 2019; 14:611-621. [PMID: 30666547 PMCID: PMC6420905 DOI: 10.1007/s11548-019-01914-4] [Citation(s) in RCA: 65] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2018] [Accepted: 01/07/2019] [Indexed: 02/08/2023]
Abstract
Purpose This study aims to adapt and evaluate the performance of different state-of-the-art deep learning object detection methods to automatically identify esophageal adenocarcinoma (EAC) regions from high-definition white light endoscopy (HD-WLE) images. Method Several state-of-the-art object detection methods using Convolutional Neural Networks (CNNs) were adapted to automatically detect abnormal regions in esophageal HD-WLE images, utilizing VGG16 as the backbone architecture for feature extraction. These methods are the Region-based Convolutional Neural Network (R-CNN), Fast R-CNN, Faster R-CNN, and the Single Shot MultiBox Detector (SSD). For evaluation, 100 images from 39 patients, manually annotated by five experienced clinicians as ground truth, were tested. Results Experimental results illustrate that the SSD and Faster R-CNN networks show promising results, with the SSD outperforming the other methods, achieving a sensitivity of 0.96, specificity of 0.92, and F-measure of 0.94. Additionally, the Average Recall Rate of the Faster R-CNN in locating the EAC region accurately is 0.83. Conclusion In this paper, recent deep learning object detection methods are adapted to detect esophageal abnormalities automatically. The evaluation proved their ability to locate abnormal regions in the esophagus from endoscopic images. Automatic detection is a crucial step that may help early detection and treatment of EAC, and it can also improve automatic tumor segmentation to monitor tumor growth and treatment outcome.
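The sensitivity and F-measure figures above come from matching predicted boxes against clinician-annotated ground-truth boxes. A common way to do that matching is intersection-over-union (IoU) with a greedy one-to-one assignment; the helpers `iou` and `match_detections` below are an illustrative sketch of that standard protocol, not necessarily the paper's exact criterion or threshold.

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union in [0, 1].
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def match_detections(preds, gts, thr=0.5):
    # Greedily match each predicted box to its best unmatched ground-truth box;
    # a match with IoU >= thr is a true positive. Returns (tp, fp, fn).
    tp, matched = 0, set()
    for p in preds:
        best, best_j = 0.0, -1
        for j, g in enumerate(gts):
            if j in matched:
                continue
            v = iou(p, g)
            if v > best:
                best, best_j = v, j
        if best >= thr:
            tp += 1
            matched.add(best_j)
    return tp, len(preds) - tp, len(gts) - tp
```

From `(tp, fp, fn)` the reported metrics follow directly: sensitivity = tp / (tp + fn), and F-measure is the harmonic mean of precision and sensitivity.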
Collapse
|