1
Gupta A, Bajaj S, Nema P, Purohit A, Kashaw V, Soni V, Kashaw SK. Potential of AI and ML in oncology research including diagnosis, treatment and future directions: A comprehensive prospective. Comput Biol Med 2025; 189:109918. [PMID: 40037170] [DOI: 10.1016/j.compbiomed.2025.109918]
Abstract
Artificial intelligence (AI) and machine learning (ML) have emerged as transformative tools in cancer research, offering the ability to process huge volumes of data rapidly and make precise therapeutic decisions. Over the last decade, AI, particularly deep learning (DL) and machine learning (ML), has significantly enhanced cancer prediction, diagnosis, and treatment by leveraging algorithms such as convolutional neural networks (CNNs) and multi-layer perceptrons (MLPs). These technologies provide reliable, efficient solutions for managing aggressive diseases like cancer, which have high recurrence and mortality rates. This prospective review highlights the applications of AI in oncology, along with FDA-approved technologies such as the EFAI RTSuite CT HN-Segmentation System, Quantib Prostate, and Paige Prostate, and explores their role in advancing cancer detection, personalized care, and treatment. Furthermore, we explore broader applications of AI in healthcare, addressing challenges, limitations, regulatory considerations, and ethical implications. By presenting these advancements, we underscore AI's potential to revolutionize cancer care, management and treatment.
Affiliation(s)
- Akanksha Gupta
- Integrated Drug Discovery Research Laboratory, Department of Pharmaceutical Sciences, Dr. Harisingh Gour University (A Central University), Sagar, Madhya Pradesh, 470003, India.
- Samyak Bajaj
- Integrated Drug Discovery Research Laboratory, Department of Pharmaceutical Sciences, Dr. Harisingh Gour University (A Central University), Sagar, Madhya Pradesh, 470003, India.
- Priyanshu Nema
- Integrated Drug Discovery Research Laboratory, Department of Pharmaceutical Sciences, Dr. Harisingh Gour University (A Central University), Sagar, Madhya Pradesh, 470003, India.
- Arpana Purohit
- Integrated Drug Discovery Research Laboratory, Department of Pharmaceutical Sciences, Dr. Harisingh Gour University (A Central University), Sagar, Madhya Pradesh, 470003, India.
- Varsha Kashaw
- Sagar Institute of Pharmaceutical Sciences, Sagar, M.P., India.
- Vandana Soni
- Integrated Drug Discovery Research Laboratory, Department of Pharmaceutical Sciences, Dr. Harisingh Gour University (A Central University), Sagar, Madhya Pradesh, 470003, India.
- Sushil K Kashaw
- Integrated Drug Discovery Research Laboratory, Department of Pharmaceutical Sciences, Dr. Harisingh Gour University (A Central University), Sagar, Madhya Pradesh, 470003, India.
2
Jiang Q, Yu Y, Ren Y, Li S, He X. A review of deep learning methods for gastrointestinal diseases classification applied in computer-aided diagnosis system. Med Biol Eng Comput 2025; 63:293-320. [PMID: 39343842] [DOI: 10.1007/s11517-024-03203-y]
Abstract
Recent advancements in deep learning have significantly improved the intelligent classification of gastrointestinal (GI) diseases, particularly in aiding clinical diagnosis. This paper reviews computer-aided diagnosis (CAD) systems for GI diseases, aligning with the actual clinical diagnostic process. It offers a comprehensive survey of deep learning (DL) techniques tailored for classifying GI diseases, addressing challenges inherent in complex scenes, clinical constraints, and technical obstacles encountered in GI imaging. First, the esophagus, stomach, small intestine, and large intestine are localized to determine which organ contains the lesion. Second, detection and classification of a single disease are performed on the premise that the organ corresponding to the image is known. Finally, comprehensive classification of multiple diseases is carried out. The results of single- and multi-disease classification are compared to achieve more accurate outcomes, and a more effective computer-aided diagnosis system for gastrointestinal diseases is then constructed.
Affiliation(s)
- Qianru Jiang
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, Zhejiang, P.R. China
- Yulin Yu
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, Zhejiang, P.R. China
- Yipei Ren
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, Zhejiang, P.R. China
- Sheng Li
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, Zhejiang, P.R. China
- Xiongxiong He
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, Zhejiang, P.R. China.
3
Raju ASN, Venkatesh K, Gatla RK, Konakalla EP, Eid MM, Titova N, Ghoneim SSM, Ghaly RNR. Colorectal cancer detection with enhanced precision using a hybrid supervised and unsupervised learning approach. Sci Rep 2025; 15:3180. [PMID: 39863646] [PMCID: PMC11763007] [DOI: 10.1038/s41598-025-86590-y]
Abstract
This work introduces a hybrid ensemble framework for the detection and segmentation of colorectal cancer. The framework combines supervised classification and unsupervised clustering to produce more interpretable and accurate diagnostic results. The method comprises several components: the CNN models ADa-22 and AD-22, transformer networks, and an SVM classifier. The CVC ClinicDB dataset supports this process, containing 1650 colonoscopy images classified as polyps or non-polyps. The best-performing ensemble was the AD-22 + Transformer + SVM model, with an AUC of 0.99, a training accuracy of 99.50%, and a testing accuracy of 99.00%. This ensemble also achieved a high accuracy of 97.50% for polyps and 99.30% for non-polyps, together with a recall of 97.80% for polyps and 98.90% for non-polyps, performing very well in identifying both cancerous and healthy regions. The framework additionally uses K-means clustering combined with bounding-box visualisation, improving segmentation and yielding a silhouette score of 0.73 with the best cluster configuration. The paper discusses how feature-interpretation challenges in medical imaging can be addressed for accurate localization and precise segmentation of malignant regions. A good balance between performance and generalization is achieved through hyperparameter optimization of learning rates and dropout rates, which effectively suppresses overfitting. The hybrid scheme addresses the deficiencies of previous approaches by incorporating effective CNN-based feature extraction, transformer networks for attention mechanisms, and the fine decision boundary of the support vector machine, further refined by unsupervised clustering to enhance visualisation. This holistic framework therefore improves both classification and segmentation, generates understandable outcomes for more rigorous benchmarking of colorectal cancer detection, and moves closer to clinical application feasibility.
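A minimal sketch of the unsupervised step described above: K-means clustering of placeholder feature vectors scored with the silhouette coefficient. The feature matrix and the range of cluster counts are illustrative assumptions, not the authors' code or the CVC ClinicDB features.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Placeholder feature vectors standing in for CNN/transformer embeddings.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))

# Try a few cluster configurations and keep the one with the best silhouette.
best_k, best_score = None, -1.0
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    score = silhouette_score(features, labels)
    if score > best_score:
        best_k, best_score = k, score
print(f"best k = {best_k}, silhouette = {best_score:.2f}")

The labels of the best configuration can then be mapped back to image regions for bounding-box visualisation.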
Affiliation(s)
- Akella S Narasimha Raju
- Department of Computer Science and Engineering (Data Science), Institute of Aeronautical Engineering, Dundigul, Hyderabad, Telangana, 500043, India.
- K Venkatesh
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai, Tamilnadu, 603203, India.
- Ranjith Kumar Gatla
- Department of Computer Science and Engineering (Data Science), Institute of Aeronautical Engineering, Dundigul, Hyderabad, Telangana, 500043, India
- Eswara Prasad Konakalla
- Department of Physics and Electronics, B.V.Raju College, Bhimavaram, Garagaparru Road, Kovvada, Andhra Pradesh, 534202, India
- Marwa M Eid
- College of Applied Medical Science, Taif University, 21944, Taif, Saudi Arabia
- Nataliia Titova
- Biomedical Engineering Department, National University Odesa Polytechnic, Odesa, 65044, Ukraine.
- Sherif S M Ghoneim
- Department of Electrical Engineering, College of Engineering, Taif University, 21944, Taif, Saudi Arabia
- Ramy N R Ghaly
- Ministry of Higher Education, Mataria Technical College, Cairo, 11718, Egypt
- Chitkara Centre for Research and Development, Chitkara University, Solan, Himachal Pradesh, 174103, India
4
Raju ASN, Venkatesh K, Padmaja B, Kumar CHNS, Patnala PRM, Lasisi A, Islam S, Razak A, Khan WA. Exploring vision transformers and XGBoost as deep learning ensembles for transforming carcinoma recognition. Sci Rep 2024; 14:30052. [PMID: 39627293] [PMCID: PMC11614869] [DOI: 10.1038/s41598-024-81456-1]
Abstract
Early detection of colorectal carcinoma (CRC), one of the most prevalent forms of cancer worldwide, significantly enhances the prognosis of patients. This research presents a new method for improving CRC detection using a deep learning ensemble within a Computer-Aided Diagnosis (CADx) framework. The method involves combining pre-trained convolutional neural network (CNN) models, such as ADaRDEV2I-22, DaRD-22, and ADaDR-22, with Vision Transformers (ViT) and XGBoost. The study addresses the challenges associated with imbalanced datasets and the necessity of sophisticated feature extraction in medical image analysis. Initially, the CKHK-22 dataset comprised 24 classes. However, we refined it to 14 classes, which led to an improvement in data balance and quality. This improvement enabled more precise feature extraction and improved classification results. We created two ensemble models: the first used Vision Transformers to capture long-range spatial relationships in the images, while the second combined CNNs with XGBoost to facilitate structured data classification. We implemented DCGAN-based augmentation to enhance the dataset's diversity. Testing showed substantial performance improvements, with the ADaDR-22 + Vision Transformer ensemble achieving the best results: a testing accuracy of 93.4% and an AUC of 98.8%. In contrast, the ADaDR-22 + XGBoost model had an AUC of 97.8% and an accuracy of 92.2%. These findings highlight the efficacy of the proposed ensemble models in detecting CRC and underscore the importance of using well-balanced, high-quality datasets. The proposed method significantly enhances clinical diagnostic accuracy and the capabilities of medical image analysis for early CRC detection.
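A minimal sketch of the second ensemble pattern described above, namely CNN-derived feature vectors classified with XGBoost. The features, labels, and model parameters are synthetic placeholders for illustration, not the CKHK-22 data or the authors' configuration.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

# Stand-in CNN embeddings and binary labels (random placeholders).
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 256))
y = rng.integers(0, 2, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                    eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))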
Affiliation(s)
- Akella Subrahmanya Narasimha Raju
- Department of Computer Science and Engineering (Data Science), Institute of Aeronautical Engineering, Dundigul, Hyderabad, Telangana, 500043, India.
- K Venkatesh
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai, Tamilnadu, 603203, India
- B Padmaja
- Department of Computer Science and Engineering-AI&ML, Institute of Aeronautical Engineering, Dundigal, Hyderabad, 500043, India
- C H N Santhosh Kumar
- Department of Computer Science and Engineering, Anurag Engineering College, Kodada, Telangana, 508206, India
- Ayodele Lasisi
- Department of Computer Science, College of Computer Science, King Khalid University, Abha, Saudi Arabia
- Saiful Islam
- Civil Engineering Department, College of Engineering, King Khalid University, 61421, Abha, Saudi Arabia
- Abdul Razak
- Department of Mechanical Engineering, P. A. College of Engineering (Affiliated to Visvesvaraya Technological University, Belagavi), Mangaluru, India
- Wahaj Ahmad Khan
- School of Civil Engineering & Architecture, Institute of Technology, Dire-Dawa University, 1362, Dire Dawa, Ethiopia.
5
ELKarazle K, Raman V, Chua C, Then P. A Hessian-Based Technique for Specular Reflection Detection and Inpainting in Colonoscopy Images. IEEE J Biomed Health Inform 2024; 28:4724-4736. [PMID: 38787660] [DOI: 10.1109/jbhi.2024.3404955]
Abstract
In the field of Computer-Aided Detection (CADx), the use of AI-based algorithms for disease detection in endoscopy images, especially colonoscopy images, is on the rise. However, these algorithms often encounter performance issues due to obstructions like specular reflection, resulting in false positives. This paper presents a novel algorithm specifically designed to tackle the challenges posed by high specular reflection regions in colonoscopy images. The proposed algorithm identifies these regions and applies precise inpainting for restoration. The process entails converting the input image from RGB to HSV color space and focusing on the Saturation (S) component in convex regions detected using a Hessian-based method. This step creates a binary mask that pinpoints areas of specular reflection. The inpainting function then uses this mask to guide the restoration of these identified regions and their borders. To ensure a seamless blend of the restored regions with the background and adjacent pixels, a feathering process is applied to the repaired regions, enhancing both the accuracy and aesthetic coherence of the inpainted images. The performance of our algorithm was rigorously tested on five unique colonoscopy datasets and various endoscopy images from the Kvasir dataset using an extensive set of evaluation metrics; a comparative analysis with existing methods consistently highlighted the superior performance of our algorithm.
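A simplified OpenCV sketch of the general idea described above: build a binary mask of specular reflections from the HSV representation and inpaint it. A plain saturation/value threshold plus dilation stands in for the Hessian-based convexity test and feathering of the published method, and the file names are placeholders.

import cv2
import numpy as np

# Placeholder file name; replace with a real colonoscopy frame on disk.
frame = cv2.imread("colonoscopy_frame.png")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

# Specular highlights: very low saturation combined with very high brightness.
mask = ((s < 40) & (v > 220)).astype(np.uint8) * 255
mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=1)  # cover borders

# Telea inpainting restores the masked regions from their surroundings.
restored = cv2.inpaint(frame, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("restored_frame.png", restored)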
6
Davila-Piñón P, Nogueira-Rodríguez A, Díez-Martín AI, Codesido L, Herrero J, Puga M, Rivas L, Sánchez E, Fdez-Riverola F, Glez-Peña D, Reboiro-Jato M, López-Fernández H, Cubiella J. Optical diagnosis in still images of colorectal polyps: comparison between expert endoscopists and PolyDeep, a Computer-Aided Diagnosis system. Front Oncol 2024; 14:1393815. [PMID: 38846970] [PMCID: PMC11153726] [DOI: 10.3389/fonc.2024.1393815]
Abstract
Background: PolyDeep is a computer-aided detection and classification (CADe/x) system trained to detect and classify polyps. During colonoscopy, CADe/x systems help endoscopists to predict the histology of colonic lesions. Objective: To compare the diagnostic performance of PolyDeep and expert endoscopists for the optical diagnosis of colorectal polyps on still images. Methods: PolyDeep Image Classification (PIC) is an in vitro diagnostic test study. The PIC database contains NBI images of 491 colorectal polyps with histological diagnosis. We evaluated the diagnostic performance of PolyDeep and four expert endoscopists for neoplasia (adenoma, sessile serrated lesion, traditional serrated adenoma) and adenoma characterization and compared them with the McNemar test. Receiver operating characteristic curves were constructed to assess the overall discriminatory ability, comparing the area under the curve of endoscopists and PolyDeep with the chi-square test of homogeneity of areas. Results: The diagnostic performance of the endoscopists and PolyDeep in the characterization of neoplasia is similar in terms of sensitivity (PolyDeep: 89.05%; E1: 91.23%, p=0.5; E2: 96.11%, p<0.001; E3: 86.65%, p=0.3; E4: 91.26%, p=0.3) and specificity (PolyDeep: 35.53%; E1: 33.80%, p=0.8; E2: 34.72%, p=1; E3: 39.24%, p=0.8; E4: 46.84%, p=0.2). The overall discriminative ability also showed no statistically significant differences (PolyDeep: 0.623; E1: 0.625, p=0.8; E2: 0.654, p=0.2; E3: 0.629, p=0.9; E4: 0.690, p=0.09). In the optical diagnosis of adenomatous polyps, we found that PolyDeep had a significantly higher sensitivity and a significantly lower specificity. The overall discriminative ability for adenomatous lesions by expert endoscopists is significantly higher than PolyDeep (PolyDeep: 0.582; E1: 0.685, p < 0.001; E2: 0.677, p < 0.0001; E3: 0.658, p < 0.01; E4: 0.694, p < 0.0001). Conclusion: PolyDeep and endoscopists have similar diagnostic performance in the optical diagnosis of neoplastic lesions. However, endoscopists have a better global discriminatory ability than PolyDeep in the optical diagnosis of adenomatous polyps.
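A minimal sketch of the McNemar comparison used above, applied to a made-up 2x2 table of paired correct/incorrect calls by a CADx system and an endoscopist; the counts are illustrative, not the study's data.

from statsmodels.stats.contingency_tables import mcnemar

# Rows: CADx correct / CADx wrong; columns: endoscopist correct / endoscopist wrong.
table = [[310, 45],
         [38, 98]]
result = mcnemar(table, exact=False, correction=True)
print(f"statistic = {result.statistic:.3f}, p-value = {result.pvalue:.3f}")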
Affiliation(s)
- Pedro Davila-Piñón
- Research Group in Gastrointestinal Oncology Ourense, Hospital Universitario de Ourense, Ourense, Spain
- Fundación Pública Galega de Investigación Biomédica Galicia Sur, Complexo Hospitalario Universitario de Ourense, Sergas, Ourense, Spain
- Alba Nogueira-Rodríguez
- Department of Computer Science, Escuela Superior de Ingenieria Informática (ESEI), CINBIO, University of Vigo, Ourense, Spain
- Next Generation Computer Systems Group (SING) Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), Ourense, Spain
- Astrid Irene Díez-Martín
- Research Group in Gastrointestinal Oncology Ourense, Hospital Universitario de Ourense, Ourense, Spain
- Fundación Pública Galega de Investigación Biomédica Galicia Sur, Complexo Hospitalario Universitario de Ourense, Sergas, Ourense, Spain
- Laura Codesido
- Research Group in Gastrointestinal Oncology Ourense, Hospital Universitario de Ourense, Ourense, Spain
- Fundación Pública Galega de Investigación Biomédica Galicia Sur, Complexo Hospitalario Universitario de Ourense, Sergas, Ourense, Spain
- Jesús Herrero
- Research Group in Gastrointestinal Oncology Ourense, Hospital Universitario de Ourense, Ourense, Spain
- Department of Gastroenterology, Hospital Universitario de Ourense, Ourense, Spain
- Department of Gastroenterology, Hospital Universitario de Ourense, Centro de Investigación Biomédica en Red de Enfermedades Hepáticas y Digestivas (CIBEREHD), Ourense, Spain
- Manuel Puga
- Research Group in Gastrointestinal Oncology Ourense, Hospital Universitario de Ourense, Ourense, Spain
- Department of Gastroenterology, Hospital Universitario de Ourense, Ourense, Spain
- Department of Gastroenterology, Hospital Universitario de Ourense, Centro de Investigación Biomédica en Red de Enfermedades Hepáticas y Digestivas (CIBEREHD), Ourense, Spain
- Laura Rivas
- Research Group in Gastrointestinal Oncology Ourense, Hospital Universitario de Ourense, Ourense, Spain
- Department of Gastroenterology, Hospital Universitario de Ourense, Ourense, Spain
- Department of Gastroenterology, Hospital Universitario de Ourense, Centro de Investigación Biomédica en Red de Enfermedades Hepáticas y Digestivas (CIBEREHD), Ourense, Spain
- Eloy Sánchez
- Research Group in Gastrointestinal Oncology Ourense, Hospital Universitario de Ourense, Ourense, Spain
- Department of Gastroenterology, Hospital Universitario de Ourense, Ourense, Spain
- Department of Gastroenterology, Hospital Universitario de Ourense, Centro de Investigación Biomédica en Red de Enfermedades Hepáticas y Digestivas (CIBEREHD), Ourense, Spain
- Florentino Fdez-Riverola
- Department of Computer Science, Escuela Superior de Ingenieria Informática (ESEI), CINBIO, University of Vigo, Ourense, Spain
- Next Generation Computer Systems Group (SING) Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), Ourense, Spain
- Daniel Glez-Peña
- Department of Computer Science, Escuela Superior de Ingenieria Informática (ESEI), CINBIO, University of Vigo, Ourense, Spain
- Next Generation Computer Systems Group (SING) Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), Ourense, Spain
- Miguel Reboiro-Jato
- Department of Computer Science, Escuela Superior de Ingenieria Informática (ESEI), CINBIO, University of Vigo, Ourense, Spain
- Next Generation Computer Systems Group (SING) Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), Ourense, Spain
- Hugo López-Fernández
- Department of Computer Science, Escuela Superior de Ingenieria Informática (ESEI), CINBIO, University of Vigo, Ourense, Spain
- Next Generation Computer Systems Group (SING) Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), Ourense, Spain
- Joaquín Cubiella
- Research Group in Gastrointestinal Oncology Ourense, Hospital Universitario de Ourense, Ourense, Spain
- Department of Gastroenterology, Hospital Universitario de Ourense, Ourense, Spain
- Department of Gastroenterology, Hospital Universitario de Ourense, Centro de Investigación Biomédica en Red de Enfermedades Hepáticas y Digestivas (CIBEREHD), Ourense, Spain
7
Zhu S, Gao J, Liu L, Yin M, Lin J, Xu C, Xu C, Zhu J. Public Imaging Datasets of Gastrointestinal Endoscopy for Artificial Intelligence: a Review. J Digit Imaging 2023; 36:2578-2601. [PMID: 37735308] [PMCID: PMC10584770] [DOI: 10.1007/s10278-023-00844-7]
Abstract
With the advances in endoscopic technologies and artificial intelligence, a large number of endoscopic imaging datasets have been made public to researchers around the world. This study aims to review and introduce these datasets. An extensive literature search was conducted to identify appropriate datasets in PubMed, and other targeted searches were conducted in GitHub, Kaggle, and Simula to identify datasets directly. We provided a brief introduction to each dataset and evaluated the characteristics of the datasets included. Moreover, two national datasets in progress were discussed. A total of 40 datasets of endoscopic images were included, of which 34 were accessible for use. Basic and detailed information on each dataset was reported. Of all the datasets, 16 focus on polyps, and 6 focus on small bowel lesions. Most datasets (n = 16) were constructed by colonoscopy only, followed by normal gastrointestinal endoscopy and capsule endoscopy (n = 9). This review may facilitate the usage of public dataset resources in endoscopic research.
Affiliation(s)
- Shiqi Zhu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
- Jingwen Gao
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
- Lu Liu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
- Minyue Yin
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
- Jiaxi Lin
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
- Chang Xu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
- Chunfang Xu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China.
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China.
- Jinzhou Zhu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China.
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China.
8
Wang K, Zhuang S, Miao J, Chen Y, Hua J, Zhou GQ, He X, Li S. Adaptive Frequency Learning Network With Anti-Aliasing Complex Convolutions for Colon Diseases Subtypes. IEEE J Biomed Health Inform 2023; 27:4816-4827. [PMID: 37796719] [DOI: 10.1109/jbhi.2023.3300288]
Abstract
The automatic and dependable identification of colonic disease subtypes by colonoscopy is crucial. Once successful, it will facilitate clinically more in-depth disease staging analysis and the formulation of more tailored treatment plans. However, inter-class confusion and brightness imbalance are major obstacles to colon disease subtyping. Notably, the Fourier-based image spectrum, with its distinctive frequency features and brightness insensitivity, offers a potential solution. To effectively leverage its advantages to address the existing challenges, this article proposes a framework capable of thorough learning in the frequency domain based on four core designs: the position consistency module, the high-frequency self-supervised module, the complex number arithmetic model, and the feature anti-aliasing module. The position consistency module enables the generation of spectra that preserve local and positional information while compressing the spectral data range to improve training stability. Through band masking and supervision, the high-frequency autoencoder module guides the network to learn useful frequency features selectively. The proposed complex number arithmetic model allows direct spectral training while avoiding the loss of phase information caused by current general-purpose real-valued operations. The feature anti-aliasing module embeds filters in the model to prevent spectral aliasing caused by down-sampling and improve performance. Experiments are performed on the collected five-class dataset, which contains 4591 colorectal endoscopic images. The outcomes show that our proposed method produces state-of-the-art results with an accuracy rate of 89.82%.
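For illustration only, the complex spectrum that such frequency-domain models operate on can be obtained with a standard 2D FFT; the snippet below uses NumPy on a random grayscale array and does not reproduce the paper's position-consistency, self-supervision, complex-arithmetic, or anti-aliasing modules.

import numpy as np

# Random grayscale "image" standing in for a colorectal endoscopic frame.
image = np.random.default_rng(0).random((224, 224)).astype(np.float32)

spectrum = np.fft.fftshift(np.fft.fft2(image))   # complex spectrum, DC centred
log_magnitude = np.log1p(np.abs(spectrum))       # brightness-insensitive magnitude
phase = np.angle(spectrum)                       # phase lost by real-valued ops
print(log_magnitude.shape, phase.shape)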
9
Lazo JF, Rosa B, Catellani M, Fontana M, Mistretta FA, Musi G, de Cobelli O, de Mathelin M, De Momi E. Semi-Supervised Bladder Tissue Classification in Multi-Domain Endoscopic Images. IEEE Trans Biomed Eng 2023; 70:2822-2833. [PMID: 37037233] [DOI: 10.1109/tbme.2023.3265679]
Abstract
OBJECTIVE: Accurate visual classification of bladder tissue during Trans-Urethral Resection of Bladder Tumor (TURBT) procedures is essential to improve early cancer diagnosis and treatment. During TURBT interventions, White Light Imaging (WLI) and Narrow Band Imaging (NBI) techniques are used for lesion detection. Each imaging technique provides diverse visual information that allows clinicians to identify and classify cancerous lesions. Computer vision methods that use both imaging techniques could improve endoscopic diagnosis. We address the challenge of tissue classification when annotations are available only in one domain, in our case WLI, and the endoscopic images correspond to an unpaired dataset, i.e. there is no exact equivalent for every image in both NBI and WLI domains. METHOD: We propose a semi-supervised Generative Adversarial Network (GAN)-based method composed of three main components: a teacher network trained on the labeled WLI data; a cycle-consistency GAN to perform unpaired image-to-image translation, and a multi-input student network. To ensure the quality of the synthetic images generated by the proposed GAN, we perform a detailed quantitative and qualitative analysis with the help of specialists. CONCLUSION: The overall average classification accuracy, precision, and recall obtained with the proposed method for tissue classification are 0.90, 0.88, and 0.89 respectively, while the same metrics obtained in the unlabeled domain (NBI) are 0.92, 0.64, and 0.94 respectively. The quality of the generated images is reliable enough to deceive specialists. SIGNIFICANCE: This study shows the potential of using semi-supervised GAN-based bladder tissue classification when annotations are limited in multi-domain data.
10
Raju ASN, Venkatesh K. EnsemDeepCADx: Empowering Colorectal Cancer Diagnosis with Mixed-Dataset Features and Ensemble Fusion CNNs on Evidence-Based CKHK-22 Dataset. Bioengineering (Basel) 2023; 10:738. [PMID: 37370669] [PMCID: PMC10295325] [DOI: 10.3390/bioengineering10060738]
Abstract
Colorectal cancer is associated with a high mortality rate and significant patient risk. Images obtained during a colonoscopy are used to make a diagnosis, highlighting the importance of timely diagnosis and treatment. Deep learning techniques could enhance the diagnostic accuracy of existing systems. Using the most advanced deep learning techniques, a brand-new EnsemDeepCADx system for accurate colorectal cancer diagnosis has been developed. The optimal accuracy is achieved by combining Convolutional Neural Networks (CNNs) with transfer learning via bidirectional long short-term memory (BILSTM) and support vector machines (SVM). Four pre-trained CNN models (AlexNet, DarkNet-19, DenseNet-201, and ResNet-50) make up the ADaDR-22, ADaR-22, and DaRD-22 ensemble CNNs. The CADx system is thoroughly evaluated at each of its stages. In the first stage, colour, greyscale, and local binary pattern (LBP) image datasets and features are utilised from the CKHK-22 mixed dataset. In the second stage, the returned features are compared to a new feature fusion dataset using three distinct CNN ensembles. Next, the ensemble CNNs are combined with SVM-based transfer learning by comparing raw features to feature fusion datasets. In the final stage of transfer learning, BILSTM and SVM are combined with a CNN ensemble. The testing accuracy for the ensemble fusion CNN DaRD-22 using BILSTM and SVM on the original, grey, LBP, and feature fusion datasets was optimal (95.96%, 88.79%, 73.54%, and 97.89%). Comparing the outputs of all four feature datasets with those of the three ensemble CNNs at each stage enables the EnsemDeepCADx system to attain its highest level of accuracy.
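A minimal sketch of the local binary pattern (LBP) feature pathway mentioned above: uniform-LBP histograms computed with scikit-image feed a simple SVM. The images, labels, and parameters are random placeholders for illustration, not the CKHK-22 dataset or the authors' pipeline.

import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray, P=8, R=1.0):
    # Uniform LBP codes take values in [0, P+1], so P+2 histogram bins cover them.
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
images = (rng.random((40, 128, 128)) * 255).astype(np.uint8)  # stand-in frames
labels = rng.integers(0, 2, size=40)                          # stand-in labels

X = np.array([lbp_histogram(img) for img in images])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))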
Affiliation(s)
- Akella Subrahmanya Narasimha Raju
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, SRM Nagar, Chennai 603203, India.
11
Wang KN, Zhuang S, Ran QY, Zhou P, Hua J, Zhou GQ, He X. DLGNet: A dual-branch lesion-aware network with the supervised Gaussian Mixture model for colon lesions classification in colonoscopy images. Med Image Anal 2023; 87:102832. [PMID: 37148864] [DOI: 10.1016/j.media.2023.102832]
Abstract
Colorectal cancer is one of the malignant tumors with the highest mortality due to the lack of obvious early symptoms. It is usually in the advanced stage when it is discovered. Thus the automatic and accurate classification of early colon lesions is of great significance for clinically estimating the status of colon lesions and formulating appropriate diagnostic programs. However, it is challenging to classify full-stage colon lesions due to the large inter-class similarities and intra-class differences of the images. In this work, we propose a novel dual-branch lesion-aware neural network (DLGNet) to classify intestinal lesions by exploring the intrinsic relationship between diseases, composed of four modules: lesion location module, dual-branch classification module, attention guidance module, and inter-class Gaussian loss function. Specifically, the elaborate dual-branch module integrates the original image and the lesion patch obtained by the lesion localization module to explore and interact with lesion-specific features from a global and local perspective. Also, the feature-guided module guides the model to pay attention to the disease-specific features by learning remote dependencies through spatial and channel attention after network feature learning. Finally, the inter-class Gaussian loss function is proposed, which assumes that each feature extracted by the network is an independent Gaussian distribution, and the inter-class clustering is more compact, thereby improving the discriminative ability of the network. The extensive experiments on the collected 2568 colonoscopy images have an average accuracy of 91.50%, and the proposed method surpasses the state-of-the-art methods. This study is the first time that colon lesions are classified at each stage and achieves promising colon disease classification performance. To motivate the community, we have made our code publicly available via https://github.com/soleilssss/DLGNet.
Affiliation(s)
- Kai-Ni Wang
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; State Key Laboratory of Digital Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Shuaishuai Zhuang
- The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Qi-Yong Ran
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; State Key Laboratory of Digital Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Ping Zhou
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; State Key Laboratory of Digital Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Jie Hua
- The First Affiliated Hospital of Nanjing Medical University, Nanjing, China; Liyang People's Hospital, Liyang Branch Hospital of Jiangsu Province Hospital, Liyang, China
- Guang-Quan Zhou
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; State Key Laboratory of Digital Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China.
- Xiaopu He
- The First Affiliated Hospital of Nanjing Medical University, Nanjing, China.
12
Development and deployment of Computer-aided Real-Time feedback for improving quality of colonoscopy in a Multi-Center clinical trial. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104609]
13
Shen MH, Huang CC, Chen YT, Tsai YJ, Liou FM, Chang SC, Phan NN. Deep Learning Empowers Endoscopic Detection and Polyps Classification: A Multiple-Hospital Study. Diagnostics (Basel) 2023; 13:1473. [PMID: 37189575] [DOI: 10.3390/diagnostics13081473]
Abstract
The present study aimed to develop an AI-based system for the detection and classification of polyps using colonoscopy images. A total of about 256,220 colonoscopy images from 5000 colorectal cancer patients were collected and processed. We used the CNN model for polyp detection and the EfficientNet-b0 model for polyp classification. Data were partitioned into training, validation and testing sets, with a 70%, 15% and 15% ratio, respectively. After the model was trained/validated/tested, to evaluate its performance rigorously, we conducted a further external validation using both prospective (n = 150) and retrospective (n = 385) approaches for data collection from 3 hospitals. The deep learning model performance with the testing set reached a state-of-the-art sensitivity and specificity of 0.9709 (95% CI: 0.9646-0.9757) and 0.9701 (95% CI: 0.9663-0.9749), respectively, for polyp detection. The polyp classification model attained an AUC of 0.9989 (95% CI: 0.9954-1.00). The external validation from 3 hospital results achieved 0.9516 (95% CI: 0.9295-0.9670) with the lesion-based sensitivity and a frame-based specificity of 0.9720 (95% CI: 0.9713-0.9726) for polyp detection. The model achieved an AUC of 0.9521 (95% CI: 0.9308-0.9734) for polyp classification. The high-performance, deep-learning-based system could be used in clinical practice to facilitate rapid, efficient and reliable decisions by physicians and endoscopists.
Affiliation(s)
- Ming-Hung Shen
- Department of Surgery, Fu Jen Catholic University Hospital, Fu Jen Catholic University, New Taipei City 24205, Taiwan
- School of Medicine, College of Medicine, Fu Jen Catholic University, New Taipei City 24205, Taiwan
- Chi-Cheng Huang
- Department of Surgery, Taipei Veterans General Hospital, Taipei City 11217, Taiwan
- Institute of Epidemiology and Preventive Medicine, College of Public Health, National Taiwan University, Taipei City 10663, Taiwan
- Yu-Tsung Chen
- Department of Internal Medicine, Fu Jen Catholic University Hospital, New Taipei City 24205, Taiwan
- Yi-Jian Tsai
- Division of Colorectal Surgery, Department of Surgery, Fu Jen Catholic University Hospital, New Taipei City 24205, Taiwan
- Graduate Institute of Biomedical Electronics and Bioinformatics, Department of Electrical Engineering, National Taiwan University, Taipei City 10663, Taiwan
- Shih-Chang Chang
- Division of Colorectal Surgery, Department of Surgery, Cathay General Hospital, Taipei City 106443, Taiwan
- Nam Nhut Phan
- Bioinformatics and Biostatistics Core, Centre of Genomic and Precision Medicine, National Taiwan University, Taipei City 10055, Taiwan
14
Iqbal S, Qureshi AN, Li J, Mahmood T. On the Analyses of Medical Images Using Traditional Machine Learning Techniques and Convolutional Neural Networks. Arch Comput Methods Eng 2023; 30:3173-3233. [PMID: 37260910] [PMCID: PMC10071480] [DOI: 10.1007/s11831-023-09899-9]
Abstract
Convolutional neural networks (CNNs) have shown impressive accomplishments in different areas, especially object detection, segmentation, reconstruction (2D and 3D), information retrieval, medical image registration, multi-lingual translation, local language processing, anomaly detection in video, and speech recognition. A CNN is a special type of neural network with a compelling and effective ability to learn features at several steps during augmentation of the data. Recently, different interesting and inspiring ideas in deep learning (DL), such as different activation functions, hyperparameter optimization, regularization, momentum, and loss functions, have improved the performance, operation, and execution of CNNs. Different internal architectural innovations and representational styles of CNNs have also significantly improved performance. This survey focuses on the internal taxonomy of deep learning and different models of convolutional neural networks, especially the depth and width of models, in addition to CNN components, applications, and current challenges of deep learning.
Affiliation(s)
- Saeed Iqbal
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000 Pakistan
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124 Beijing China
- Adnan N. Qureshi
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000 Pakistan
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124 Beijing China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing University of Technology, Beijing, 100124 Beijing China
- Tariq Mahmood
- Artificial Intelligence and Data Analytics (AIDA) Lab, College of Computer & Information Sciences (CCIS), Prince Sultan University, Riyadh, 11586 Kingdom of Saudi Arabia
15
Yeung M, Rundo L, Nan Y, Sala E, Schönlieb CB, Yang G. Calibrating the Dice Loss to Handle Neural Network Overconfidence for Biomedical Image Segmentation. J Digit Imaging 2023; 36:739-752. [PMID: 36474089] [PMCID: PMC10039156] [DOI: 10.1007/s10278-022-00735-3]
Abstract
The Dice similarity coefficient (DSC) is both a widely used metric and loss function for biomedical image segmentation due to its robustness to class imbalance. However, it is well known that the DSC loss is poorly calibrated, resulting in overconfident predictions that cannot be usefully interpreted in biomedical and clinical practice. Performance is often the only metric used to evaluate segmentations produced by deep neural networks, and calibration is often neglected. However, calibration is important for translation into biomedical and clinical practice, providing crucial contextual information to model predictions for interpretation by scientists and clinicians. In this study, we provide a simple yet effective extension of the DSC loss, named the DSC++ loss, that selectively modulates the penalty associated with overconfident, incorrect predictions. As a standalone loss function, the DSC++ loss achieves significantly improved calibration over the conventional DSC loss across six well-validated open-source biomedical imaging datasets, including both 2D binary and 3D multi-class segmentation tasks. Similarly, we observe significantly improved calibration when integrating the DSC++ loss into four DSC-based loss functions. Finally, we use softmax thresholding to illustrate that well calibrated outputs enable tailoring of recall-precision bias, which is an important post-processing technique to adapt the model predictions to suit the biomedical or clinical task. The DSC++ loss overcomes the major limitation of the DSC loss, providing a suitable loss function for training deep learning segmentation models for use in biomedical and clinical practice. Source code is available at https://github.com/mlyg/DicePlusPlus .
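For context, the standard soft Dice loss that the DSC++ loss extends can be written in a few lines of PyTorch. This is a generic sketch of the baseline formulation only; the DSC++ penalty modulation itself is not reproduced here (see the authors' linked repository for that).

import torch

def soft_dice_loss(probs: torch.Tensor, target: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    # probs and target: (N, C, H, W); probs in [0, 1], target one-hot encoded.
    dims = (0, 2, 3)
    intersection = (probs * target).sum(dims)
    cardinality = probs.sum(dims) + target.sum(dims)
    dice = (2.0 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice.mean()

# Example with random logits and a random 3-class label map.
logits = torch.randn(2, 3, 64, 64)
target = torch.nn.functional.one_hot(
    torch.randint(0, 3, (2, 64, 64)), num_classes=3).permute(0, 3, 1, 2).float()
loss = soft_dice_loss(torch.softmax(logits, dim=1), target)
print(loss.item())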
Affiliation(s)
- Michael Yeung
- Department of Radiology, University of Cambridge, Hills Rd, Cambridge, CB2 0QQ UK
- National Heart & Lung Institute, Imperial College London, Dovehouse St, London, SW3 6LY UK
- Department of Computing, Imperial College London, London, UK
- Leonardo Rundo
- Department of Radiology, University of Cambridge, Hills Rd, Cambridge, CB2 0QQ UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Robinson Way, Cambridge, CB2 0RE UK
- Department of Information and Electrical Engineering and Applied Mathematics (DIEM), University of Salerno, Fisciano, Salerno 84084 Italy
- Yang Nan
- National Heart & Lung Institute, Imperial College London, Dovehouse St, London, SW3 6LY UK
- Evis Sala
- Department of Radiology, University of Cambridge, Hills Rd, Cambridge, CB2 0QQ UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Robinson Way, Cambridge, CB2 0RE UK
- Carola-Bibiane Schönlieb
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Rd, Cambridge, CB3 0WA UK
- Guang Yang
- National Heart & Lung Institute, Imperial College London, Dovehouse St, London, SW3 6LY UK
16
Cherubini A, Dinh NN. A Review of the Technology, Training, and Assessment Methods for the First Real-Time AI-Enhanced Medical Device for Endoscopy. Bioengineering (Basel) 2023; 10:404. [PMID: 37106592] [PMCID: PMC10136070] [DOI: 10.3390/bioengineering10040404]
Abstract
Artificial intelligence (AI) has the potential to assist in endoscopy and improve decision making, particularly in situations where humans may make inconsistent judgments. The performance assessment of the medical devices operating in this context is a complex combination of bench tests, randomized controlled trials, and studies on the interaction between physicians and AI. We review the scientific evidence published about GI Genius, the first AI-powered medical device for colonoscopy to enter the market, and the device that is most widely tested by the scientific community. We provide an overview of its technical architecture, AI training and testing strategies, and regulatory path. In addition, we discuss the strengths and limitations of the current platform and its potential impact on clinical practice. The details of the algorithm architecture and the data that were used to train the AI device have been disclosed to the scientific community in the pursuit of a transparent AI. Overall, the first AI-enabled medical device for real-time video analysis represents a significant advancement in the use of AI for endoscopies and has the potential to improve the accuracy and efficiency of colonoscopy procedures.
Affiliation(s)
- Andrea Cherubini
- Cosmo Intelligent Medical Devices, D02KV60 Dublin, Ireland
- Milan Center for Neuroscience, University of Milano–Bicocca, 20126 Milano, Italy
- Nhan Ngo Dinh
- Cosmo Intelligent Medical Devices, D02KV60 Dublin, Ireland
17
Nogueira-Rodríguez A, Glez-Peña D, Reboiro-Jato M, López-Fernández H. Negative Samples for Improving Object Detection-A Case Study in AI-Assisted Colonoscopy for Polyp Detection. Diagnostics (Basel) 2023; 13:966. [PMID: 36900110] [PMCID: PMC10001273] [DOI: 10.3390/diagnostics13050966]
Abstract
Deep learning object-detection models are being successfully applied to develop computer-aided diagnosis systems for aiding polyp detection during colonoscopies. Here, we evidence the need to include negative samples for both (i) reducing false positives during the polyp-finding phase, by including images with artifacts that may confuse the detection models (e.g., medical instruments, water jets, feces, blood, excessive proximity of the camera to the colon wall, blurred images, etc.) that are usually not included in model development datasets, and (ii) correctly estimating a more realistic performance of the models. By retraining our previously developed YOLOv3-based detection model with a dataset that includes 15% of additional not-polyp images with a variety of artifacts, we were able to generally improve its F1 performance in our internal test datasets (from an average F1 of 0.869 to 0.893), which now include such type of images, as well as in four public datasets that include not-polyp images (from an average F1 of 0.695 to 0.722).
Affiliation(s)
- Alba Nogueira-Rodríguez
- CINBIO, Department of Computer Science, ESEI—Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
- Daniel Glez-Peña
- CINBIO, Department of Computer Science, ESEI—Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
- Miguel Reboiro-Jato
- CINBIO, Department of Computer Science, ESEI—Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
- Hugo López-Fernández
- CINBIO, Department of Computer Science, ESEI—Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
18
Chadebecq F, Lovat LB, Stoyanov D. Artificial intelligence and automation in endoscopy and surgery. Nat Rev Gastroenterol Hepatol 2023; 20:171-182. [PMID: 36352158] [DOI: 10.1038/s41575-022-00701-y]
Abstract
Modern endoscopy relies on digital technology, from high-resolution imaging sensors and displays to electronics connecting configurable illumination and actuation systems for robotic articulation. In addition to enabling more effective diagnostic and therapeutic interventions, the digitization of the procedural toolset enables video data capture of the internal human anatomy at unprecedented levels. Interventional video data encapsulate functional and structural information about a patient's anatomy as well as events, activity and action logs about the surgical process. This detailed but difficult-to-interpret record from endoscopic procedures can be linked to preoperative and postoperative records or patient imaging information. Rapid advances in artificial intelligence, especially in supervised deep learning, can utilize data from endoscopic procedures to develop systems for assisting procedures leading to computer-assisted interventions that can enable better navigation during procedures, automation of image interpretation and robotically assisted tool manipulation. In this Perspective, we summarize state-of-the-art artificial intelligence for computer-assisted interventions in gastroenterology and surgery.
Affiliation(s)
- François Chadebecq
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Laurence B Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK.
19
Mansur A, Saleem Z, Elhakim T, Daye D. Role of artificial intelligence in risk prediction, prognostication, and therapy response assessment in colorectal cancer: current state and future directions. Front Oncol 2023; 13:1065402. [PMID: 36761957] [PMCID: PMC9905815] [DOI: 10.3389/fonc.2023.1065402]
Abstract
Artificial Intelligence (AI) is a branch of computer science that utilizes optimization, probabilistic and statistical approaches to analyze and make predictions based on a vast amount of data. In recent years, AI has revolutionized the field of oncology and spearheaded novel approaches in the management of various cancers, including colorectal cancer (CRC). Notably, the applications of AI to diagnose, prognosticate, and predict response to therapy in CRC, is gaining traction and proving to be promising. There have also been several advancements in AI technologies to help predict metastases in CRC and in Computer-Aided Detection (CAD) Systems to improve miss rates for colorectal neoplasia. This article provides a comprehensive review of the role of AI in predicting risk, prognosis, and response to therapies among patients with CRC.
Affiliation(s)
- Arian Mansur
- Harvard Medical School, Boston, MA, United States
- Tarig Elhakim
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Dania Daye
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
20
Semantic segmentation in medical images through transfused convolution and transformer networks. Appl Intell 2023; 53:1132-1148. [PMID: 35498554] [PMCID: PMC9035506] [DOI: 10.1007/s10489-022-03642-w]
Abstract
Recent decades have witnessed rapid development in the field of medical image segmentation. Deep learning-based fully convolutional neural networks have played a significant role in the development of automated medical image segmentation models. Though immensely effective, such networks only take into account localized features and are unable to capitalize on the global context of medical images. In this paper, two deep learning based models have been proposed, namely USegTransformer-P and USegTransformer-S. The proposed models capitalize upon local features and global features by amalgamating transformer-based encoders and convolution-based encoders to segment medical images with high precision. Both the proposed models deliver promising results, performing better than the previous state-of-the-art models in various segmentation tasks such as brain tumor, lung nodule, skin lesion and nuclei segmentation. The authors believe that the ability of USegTransformer-P and USegTransformer-S to perform segmentation with high precision could remarkably benefit medical practitioners and radiologists around the world.
21
Nisha JS, Gopi VP. Colorectal polyp detection in colonoscopy videos using image enhancement and discrete orthonormal Stockwell transform. Sādhanā 2022; 47:234. [DOI: 10.1007/s12046-022-01970-8]
22
Nisha JS, Gopi VP, Palanisamy P. Colorectal polyp detection using image enhancement and Scaled YOLOv4 algorithm. Biomedical Engineering: Applications, Basis and Communications 2022; 34. [DOI: 10.4015/s1016237222500260]
Abstract
Colorectal cancer (CRC) is a common cancer-related cause of death globally and is now the third leading cause of cancer-related mortality worldwide. As the number of colorectal polyp cases rises, it is more important than ever to identify and diagnose them early. Object detection models have recently become popular for extracting highly representative features. Colonoscopy has been shown to be a useful diagnostic procedure for examining anomalies in the lower half of the digestive system. This research presents a novel image-enhancement approach followed by a Scaled YOLOv4 network for the early diagnosis of polyps, lowering the high risk of CRC. The proposed network is trained on the CVC ClinicDB database, while the CVC ColonDB and ETIS Larib databases are used for testing. On the CVC ColonDB database, the performance metrics are precision (95.13%), recall (74.92%), F1-score (83.19%), and F2-score (89.89%). On the ETIS Larib database, the performance metrics are precision (94.30%), recall (77.30%), F1-score (84.90%), and F2-score (80.20%). On both databases, the proposed methodology outperforms existing methods in terms of F1-score, F2-score, and precision. The proposed YOLO object detection model provides an accurate polyp detection strategy for real-time applications.
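For readers comparing the precision, recall, F1- and F2-scores quoted in these entries, the standard F-beta definition that relates them is sketched below; the values used are illustrative, and individual papers may follow their own averaging conventions.

```python
def f_beta(precision: float, recall: float, beta: float = 1.0) -> float:
    """General F-beta score; beta > 1 weights recall more heavily than precision."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# With precision 0.95 and recall 0.75 (illustrative values),
# F1 is about 0.84 and F2 about 0.78.
p, r = 0.95, 0.75
print(f"F1 = {f_beta(p, r, 1.0):.4f}")
print(f"F2 = {f_beta(p, r, 2.0):.4f}")
```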
Collapse
Affiliation(s)
- J. S. Nisha
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamil Nadu 620015, India
| | - Varun P. Gopi
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamil Nadu 620015, India
| | - P. Palanisamy
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamil Nadu 620015, India
| |
Collapse
|
23
|
Ramzan M, Raza M, Sharif MI, Kadry S. Gastrointestinal Tract Polyp Anomaly Segmentation on Colonoscopy Images Using Graft-U-Net. J Pers Med 2022; 12:jpm12091459. [PMID: 36143244 PMCID: PMC9503374 DOI: 10.3390/jpm12091459] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Revised: 08/28/2022] [Accepted: 09/01/2022] [Indexed: 11/21/2022] Open
Abstract
Computer-aided polyp segmentation is a crucial task that supports gastroenterologists in examining and resecting anomalous tissue in the gastrointestinal tract. Polyps grow mainly in the colorectal area of the gastrointestinal tract, as protrusions of abnormal tissue in the mucous membrane that increase the risk of incurable diseases such as cancer. Early examination of polyps, particularly adenomas, can therefore decrease the chance of their developing into cancer. Deep learning-based diagnostic systems play a vital role in diagnosing diseases at an early stage. A deep learning method, Graft-U-Net, is proposed to segment polyps in colonoscopy frames. Graft-U-Net is a modified version of UNet comprising three stages: preprocessing, encoder, and decoder. The preprocessing stage is used to improve the contrast of the colonoscopy frames. The encoder blocks analyze features, while the decoder blocks synthesize them into the segmentation output. The Graft-U-Net model offers better segmentation results than existing deep learning models. The experiments were conducted using two open-access datasets, Kvasir-SEG and CVC-ClinicDB, which were prepared from the large bowel of the gastrointestinal tract during colonoscopy procedures. The proposed model achieves a mean Dice of 96.61% and a mean Intersection over Union (mIoU) of 82.45% on the Kvasir-SEG dataset. Similarly, on the CVC-ClinicDB dataset, the method achieved a mean Dice of 89.95% and an mIoU of 81.38%.
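A minimal sketch of how the reported mean Dice and mean IoU segmentation metrics are typically computed on binary masks (not the authors' evaluation code) is given below; the toy masks are synthetic.

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Dice coefficient and IoU for binary masks (values in {0, 1})."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return dice, iou

# Toy example: two overlapping square masks.
a = np.zeros((64, 64), dtype=np.uint8); a[10:40, 10:40] = 1
b = np.zeros((64, 64), dtype=np.uint8); b[20:50, 20:50] = 1
print(dice_and_iou(a, b))
```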
Collapse
Affiliation(s)
- Muhammad Ramzan
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 47040, Pakistan
| | - Mudassar Raza
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 47040, Pakistan
- Correspondence:
| | - Muhammad Imran Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 47040, Pakistan
| | - Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos 999095, Lebanon
| |
Collapse
|
24
|
Double-Balanced Loss for Imbalanced Colorectal Lesion Classification. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:1691075. [PMID: 35979050 PMCID: PMC9377973 DOI: 10.1155/2022/1691075] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/23/2022] [Revised: 07/06/2022] [Accepted: 07/13/2022] [Indexed: 11/18/2022]
Abstract
Colorectal cancer has a high incidence rate in countries around the world, and patient survival is improved by early detection. With the development of deep learning-based object detection technology, computer-aided diagnosis of colonoscopy images has become a reality, which can effectively reduce missed diagnoses and misdiagnoses. In medical image recognition, the assumption that training samples are independent and identically distributed (IID) is key to the high accuracy of deep learning. However, medical image classification datasets are imbalanced in most cases. This paper proposes a new loss function, the double-balanced loss function, for deep learning models, to mitigate the impact of dataset imbalance on classification accuracy. It introduces the effects of sample size and sample difficulty into the loss calculation and thus handles both sample-size imbalance and sample-difficulty imbalance. Combined with deep learning, it is used to build a medical diagnosis model for colorectal cancer. Experiments on three colorectal white-light endoscopy image datasets verify that the proposed double-balanced loss function performs better on the imbalanced classification problem of colorectal medical images.
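The abstract does not spell out the exact formulation, so the sketch below only illustrates the general idea of jointly reweighting a cross-entropy loss by class frequency (sample-size imbalance) and by prediction difficulty (a focal-style factor); the weighting scheme is an assumption, not the authors' double-balanced loss.

```python
import torch
import torch.nn.functional as F

def balanced_difficulty_loss(logits, targets, class_counts, gamma=2.0, beta=0.999):
    """Cross-entropy reweighted by (i) effective class frequency and
    (ii) prediction difficulty via a focal-style modulating factor.
    Illustrative only; not the exact double-balanced formulation."""
    # Class-balanced weights based on the "effective number" of samples per class.
    effective_num = 1.0 - torch.pow(beta, class_counts.float())
    class_weights = (1.0 - beta) / effective_num
    class_weights = class_weights / class_weights.sum() * len(class_counts)

    ce = F.cross_entropy(logits, targets, reduction="none")  # per-sample CE
    pt = torch.exp(-ce)                                      # prob. of true class
    focal = (1.0 - pt) ** gamma                              # difficulty weight
    return (class_weights[targets] * focal * ce).mean()

# Toy usage: 3-class problem with imbalanced class counts.
logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
counts = torch.tensor([500, 50, 5])
print(balanced_difficulty_loss(logits, targets, counts))
```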
Collapse
|
25
|
Surgical Tool Datasets for Machine Learning Research: A Survey. Int J Comput Vis 2022. [DOI: 10.1007/s11263-022-01640-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
This paper is a comprehensive survey of datasets for surgical tool detection and related surgical data science and machine learning techniques and algorithms. The survey offers a high-level perspective of current research in this area, analyses the taxonomy of approaches adopted by researchers using surgical tool datasets, and addresses key areas of research, such as the datasets used, the evaluation metrics applied and the deep learning techniques utilised. Our presentation and taxonomy provide a framework that facilitates greater understanding of current work and highlights the challenges and opportunities for further innovative and useful research.
Collapse
|
26
|
Adjei PE, Lonseko ZM, Du W, Zhang H, Rao N. Examining the effect of synthetic data augmentation in polyp detection and segmentation. Int J Comput Assist Radiol Surg 2022; 17:1289-1302. [PMID: 35678960 DOI: 10.1007/s11548-022-02651-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2021] [Accepted: 04/21/2022] [Indexed: 12/17/2022]
Abstract
PURPOSE As with several medical image analysis tasks based on deep learning, gastrointestinal image analysis is plagued by data scarcity, privacy concerns and an insufficient number of pathology samples. This study examines the generation and utility of synthetic colonoscopy images with polyps for data augmentation. METHODS We modify and train a pix2pix model to generate synthetic colonoscopy samples with polyps to augment the original dataset. Subsequently, we create a variety of datasets by varying the quantity of synthetic and traditional augmentation samples, to train a U-Net network and a Faster R-CNN model for segmentation and detection of polyps, respectively. We compare the performance of the models trained with the resulting datasets in terms of F1 score, intersection over union, precision and recall. Further, we compare the performances of the models on unseen polyp datasets to assess their generalization ability. RESULTS The average F1 coefficient and intersection over union of U-Net improve with an increasing number of synthetic samples over all test datasets. The performance of the Faster R-CNN model is also improved in terms of polyp detection, with a decreasing false-negative rate. Further, the experimental results for polyp detection outperform similar studies in the literature on the ETIS-PolypLaribDB dataset. CONCLUSION By varying the quantity of synthetic and traditional augmentation, there is the potential to control the sensitivity of deep learning models in polyp segmentation and detection. Further, GAN-based augmentation is a viable option for improving the performance of models for polyp segmentation and detection.
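A minimal sketch of the dataset-mixing step this study varies, i.e., controlling how many GAN-generated samples are added to the real training set, is shown below; the helper and dataset names are hypothetical and the datasets are assumed to be standard torch Datasets.

```python
import random
from torch.utils.data import ConcatDataset, Subset

def mix_real_and_synthetic(real_ds, synthetic_ds, synth_ratio=0.5, seed=0):
    """Build a training set containing all real samples plus a chosen
    fraction of synthetic ones (synth_ratio = synthetic count / real count).
    Hypothetical helper for illustration only."""
    n_synth = int(len(real_ds) * synth_ratio)
    rng = random.Random(seed)
    idx = rng.sample(range(len(synthetic_ds)), min(n_synth, len(synthetic_ds)))
    return ConcatDataset([real_ds, Subset(synthetic_ds, idx)])

# Hypothetical usage:
# train_set = mix_real_and_synthetic(real_polyps, pix2pix_polyps, synth_ratio=1.0)
```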
Collapse
Affiliation(s)
- Prince Ebenezer Adjei
- Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China.,School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China.,Department of Computer Engineering, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
| | - Zenebe Markos Lonseko
- Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China.,School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
| | - Wenju Du
- Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China.,School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
| | - Han Zhang
- Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China.,School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
| | - Nini Rao
- Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China. .,School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China.
| |
Collapse
|
27
|
Biffi C, Salvagnini P, Dinh NN, Hassan C, Sharma P, Cherubini A. A novel AI device for real-time optical characterization of colorectal polyps. NPJ Digit Med 2022; 5:84. [PMID: 35773468 PMCID: PMC9247164 DOI: 10.1038/s41746-022-00633-6] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2022] [Accepted: 06/16/2022] [Indexed: 01/03/2023] Open
Abstract
Accurate in-vivo optical characterization of colorectal polyps is key to selecting the optimal treatment regimen during colonoscopy. However, reported accuracies vary widely among endoscopists. We developed a novel intelligent medical device able to seamlessly operate in real time using the conventional white light (WL) endoscopy video stream, without virtual chromoendoscopy (blue light, BL). In this work, we evaluated the standalone performance of this computer-aided diagnosis device (CADx) on a prospectively acquired dataset of unaltered colonoscopy videos. An international group of endoscopists performed optical characterization of each polyp acquired in a prospective study, blinded to both histology and the CADx result, by means of an online platform enabling careful video assessment. Colorectal polyps were categorized as either “adenoma” or “non-adenoma” by the reviewers, subdivided into 10 expert and 11 non-expert endoscopists, and by the CADx. A total of 513 polyps from 165 patients were assessed. CADx accuracy in WL was found comparable to the accuracy of expert endoscopists (CADxWL/Exp; OR 1.211 [0.766–1.915]) using histopathology as the reference standard. Moreover, CADx accuracy in WL was found superior to the accuracy of non-expert endoscopists (CADxWL/NonExp; OR 1.875 [1.191–2.953]), and CADx accuracy in BL was found comparable to it (CADxBL/CADxWL; OR 0.886 [0.612–1.282]). The proposed intelligent device shows the potential to support non-expert endoscopists in systematically reaching the performance of expert endoscopists in optical characterization.
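The accuracy comparisons above are expressed as odds ratios with confidence intervals; a standard 2x2-table odds ratio with a Woolf (log) 95% CI can be computed as in the sketch below, using made-up counts rather than the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI for a 2x2 table:
    a = group 1 correct, b = group 1 incorrect,
    c = group 2 correct, d = group 2 incorrect."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts for two readers assessed on the same set of polyps.
print(odds_ratio_ci(430, 83, 400, 113))
```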
Collapse
Affiliation(s)
- Carlo Biffi
- Artificial Intelligence Group, Cosmo AI/Linkverse, Lainate/Rome, Italy
| | - Pietro Salvagnini
- Artificial Intelligence Group, Cosmo AI/Linkverse, Lainate/Rome, Italy
| | - Nhan Ngo Dinh
- Artificial Intelligence Group, Cosmo AI/Linkverse, Lainate/Rome, Italy
| | - Cesare Hassan
- Gastroenterology Unit, Nuovo Regina Margherita Hospital, Rome, Italy.,Endoscopy Unit, Humanitas Clinical and Research Center IRCCS, Rozzano, Italy
| | - Prateek Sharma
- VA Medical Center, Kansas City, MO, USA.,University of Kansas School of Medicine, Kansas City, MO, USA
| | | | - Andrea Cherubini
- Artificial Intelligence Group, Cosmo AI/Linkverse, Lainate/Rome, Italy. .,Milan Center for Neuroscience, University of Milano-Bicocca, 20126, Milano, Italy.
| |
Collapse
|
28
|
Awidi M, Bagga A. Artificial intelligence and machine learning in colorectal cancer. Artif Intell Gastrointest Endosc 2022; 3:31-43. [DOI: 10.37126/aige.v3.i3.31] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/17/2022] [Revised: 03/24/2022] [Accepted: 06/20/2022] [Indexed: 02/06/2023] Open
|
29
|
Luca M, Ciobanu A. Polyp detection in video colonoscopy using deep learning. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-219276] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Automatic processing of video colonoscopy is a challenge, and further development of computer-assisted diagnosis is very helpful for assessing the correctness of the exam, for e-learning and training, for statistics on polyp malignancy and for polyp surveys. New devices and programming languages are emerging, and deep learning has already begun to furnish astonishing results in the quest for high-speed, optimal polyp detection software. This paper presents a successful attempt at detecting intestinal polyps in real-time video colonoscopy with deep learning, using MobileNet.
Collapse
Affiliation(s)
- Mihaela Luca
- Institute of Computer Science, Romanian Academy Iaşi Branch, Iaşi, Romania
| | - Adrian Ciobanu
- Institute of Computer Science, Romanian Academy Iaşi Branch, Iaşi, Romania
| |
Collapse
|
30
|
Yue G, Han W, Jiang B, Zhou T, Cong R, Wang T. Boundary Constraint Network with Cross Layer Feature Integration for Polyp Segmentation. IEEE J Biomed Health Inform 2022; 26:4090-4099. [PMID: 35536816 DOI: 10.1109/jbhi.2022.3173948] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Clinically, proper polyp localization in endoscopy images plays a vital role in follow-up treatment (e.g., surgical planning). Deep convolutional neural networks (CNNs) provide a favourable prospect for automatic polyp segmentation and evade the limitations of visual inspection, e.g., subjectivity and overwork. However, most existing CNN-based methods often provide unsatisfactory segmentation performance. In this paper, we propose a novel boundary constraint network, namely BCNet, for accurate polyp segmentation. The success of BCNet benefits from integrating cross-level context information and leveraging edge information. Specifically, to avoid the drawbacks caused by simple feature addition or concatenation, BCNet applies a cross-layer feature integration strategy (CFIS) to fuse the features of the top three highest layers, yielding better performance. CFIS consists of three attention-driven cross-layer feature interaction modules (ACFIMs) and two global feature integration modules (GFIMs). ACFIM adaptively fuses the context information of the top three highest layers via the self-attention mechanism instead of direct addition or concatenation. GFIM integrates the fused information across layers with guidance from global attention. To obtain accurate boundaries, BCNet introduces a bilateral boundary extraction module that explores the polyp and non-polyp information of the shallow layer collaboratively based on high-level location information and boundary supervision. Through joint supervision of the polyp area and boundary, BCNet is able to produce more accurate polyp masks. Experimental results on three public datasets show that the proposed BCNet outperforms seven state-of-the-art competing methods in terms of both effectiveness and generalization.
Collapse
|
31
|
A deep ensemble learning method for colorectal polyp classification with optimized network parameters. APPL INTELL 2022. [DOI: 10.1007/s10489-022-03689-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
Abstract
Colorectal Cancer (CRC), a leading cause of cancer-related deaths, can be abated by timely polypectomy. Computer-aided classification of polyps helps endoscopists to resect them in a timely manner without submitting the sample for histology. Deep learning-based algorithms are promoted for computer-aided colorectal polyp classification. However, existing methods do not report the hyperparameter settings essential for model optimisation. Furthermore, unlike the polyp types hyperplastic and adenomatous, the third type, serrated adenoma, is difficult to classify due to its hybrid nature. Moreover, automated assessment of polyps is a challenging task due to the similarities in their patterns; therefore, the strengths of individual weak learners are combined into a weighted ensemble model for accurate classification by establishing optimised hyperparameters. In contrast to existing studies on binary classification, multiclass classification requires evaluation through advanced measures. This study compared six existing Convolutional Neural Networks, in addition to transfer learning, and selected only the best-performing architectures for the ensemble models. The performance evaluation of the proposed method on the UCI and PICCOLO datasets in terms of accuracy (96.3%, 81.2%), precision (95.5%, 82.4%), recall (97.2%, 81.1%), F1-score (96.3%, 81.3%) and model reliability using Cohen’s Kappa Coefficient (0.94, 0.62) shows its superiority over existing models. The outcomes of experiments by other studies on the same dataset yielded 82.5% accuracy with 72.7% recall by SVM and 85.9% accuracy with 87.6% recall by other deep learning methods. The proposed method demonstrates that a weighted ensemble of optimised networks, along with data augmentation, significantly boosts the performance of deep learning-based CAD.
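A minimal sketch of the two ingredients named above, a weighted ensemble of per-model class probabilities and reliability scoring with Cohen's Kappa, is shown below with toy data; the weights are illustrative, not the optimised ones from the study.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def weighted_ensemble(prob_list, weights):
    """Weighted average of per-model class probabilities -> predicted labels."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    stacked = np.stack(prob_list, axis=0)           # (n_models, n_samples, n_classes)
    fused = np.tensordot(weights, stacked, axes=1)  # (n_samples, n_classes)
    return fused.argmax(axis=1)

# Toy example: three models, five samples, three polyp classes.
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(3), size=5) for _ in range(3)]
y_true = np.array([0, 1, 2, 1, 0])
y_pred = weighted_ensemble(probs, weights=[0.5, 0.3, 0.2])
print(y_pred, cohen_kappa_score(y_true, y_pred))
```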
Collapse
|
32
|
Deiana AM, Tran N, Agar J, Blott M, Di Guglielmo G, Duarte J, Harris P, Hauck S, Liu M, Neubauer MS, Ngadiuba J, Ogrenci-Memik S, Pierini M, Aarrestad T, Bähr S, Becker J, Berthold AS, Bonventre RJ, Müller Bravo TE, Diefenthaler M, Dong Z, Fritzsche N, Gholami A, Govorkova E, Guo D, Hazelwood KJ, Herwig C, Khan B, Kim S, Klijnsma T, Liu Y, Lo KH, Nguyen T, Pezzullo G, Rasoulinezhad S, Rivera RA, Scholberg K, Selig J, Sen S, Strukov D, Tang W, Thais S, Unger KL, Vilalta R, von Krosigk B, Wang S, Warburton TK. Applications and Techniques for Fast Machine Learning in Science. Front Big Data 2022; 5:787421. [PMID: 35496379 PMCID: PMC9041419 DOI: 10.3389/fdata.2022.787421] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Accepted: 01/31/2020] [Indexed: 01/10/2023] Open
Abstract
In this community review report, we discuss applications and techniques for fast machine learning (ML) in science: the concept of integrating powerful ML methods into the real-time experimental data processing loop to accelerate scientific discovery. The material for the report builds on two workshops held by the Fast ML for Science community and covers three main areas: applications for fast ML across a number of scientific domains; techniques for training and implementing performant and resource-efficient ML algorithms; and computing architectures, platforms, and technologies for deploying these algorithms. We also present overlapping challenges across the multiple scientific domains where common solutions can be found. This community report is intended to give plenty of examples and inspiration for scientific discovery through integrated and accelerated ML solutions. This is followed by a high-level overview and organization of technical advances, including an abundance of pointers to source material, which can enable these breakthroughs.
Collapse
Affiliation(s)
| | - Nhan Tran
- Fermi National Accelerator Laboratory, Batavia, IL, United States
- Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL, United States
| | - Joshua Agar
- Department of Materials Science and Engineering, Lehigh University, Bethlehem, PA, United States
| | | | | | - Javier Duarte
- Department of Physics, University of California, San Diego, San Diego, CA, United States
| | - Philip Harris
- Massachusetts Institute of Technology, Cambridge, MA, United States
| | - Scott Hauck
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, United States
| | - Mia Liu
- Department of Physics and Astronomy, Purdue University, West Lafayette, IN, United States
| | - Mark S. Neubauer
- Department of Physics, University of Illinois Urbana-Champaign, Champaign, IL, United States
| | | | - Seda Ogrenci-Memik
- Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL, United States
| | - Maurizio Pierini
- European Organization for Nuclear Research (CERN), Meyrin, Switzerland
| | - Thea Aarrestad
- European Organization for Nuclear Research (CERN), Meyrin, Switzerland
| | - Steffen Bähr
- Karlsruhe Institute of Technology, Karlsruhe, Germany
| | - Jürgen Becker
- Karlsruhe Institute of Technology, Karlsruhe, Germany
| | - Anne-Sophie Berthold
- Institute of Nuclear and Particle Physics, Technische Universität Dresden, Dresden, Germany
| | | | - Tomás E. Müller Bravo
- Department of Physics and Astronomy, University of Southampton, Southampton, United Kingdom
| | - Markus Diefenthaler
- Thomas Jefferson National Accelerator Facility, Newport News, VA, United States
| | - Zhen Dong
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, United States
| | - Nick Fritzsche
- Institute of Nuclear and Particle Physics, Technische Universität Dresden, Dresden, Germany
| | - Amir Gholami
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, United States
| | | | - Dongning Guo
- Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL, United States
| | | | - Christian Herwig
- Fermi National Accelerator Laboratory, Batavia, IL, United States
| | - Babar Khan
- Department of Computer Science, Technical University Darmstadt, Darmstadt, Germany
| | - Sehoon Kim
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, United States
| | - Thomas Klijnsma
- Fermi National Accelerator Laboratory, Batavia, IL, United States
| | - Yaling Liu
- Department of Bioengineering, Lehigh University, Bethlehem, PA, United States
| | - Kin Ho Lo
- Department of Physics, University of Florida, Gainesville, FL, United States
| | - Tri Nguyen
- Massachusetts Institute of Technology, Cambridge, MA, United States
| | | | | | - Ryan A. Rivera
- Fermi National Accelerator Laboratory, Batavia, IL, United States
| | - Kate Scholberg
- Department of Physics, Duke University, Durham, NC, United States
| | | | - Sougata Sen
- Birla Institute of Technology and Science, Pilani, India
| | - Dmitri Strukov
- Department of Electrical and Computer Engineering, University of California, Santa Barbara, Santa Barbara, CA, United States
| | - William Tang
- Department of Physics, Princeton University, Princeton, NJ, United States
| | - Savannah Thais
- Department of Physics, Princeton University, Princeton, NJ, United States
| | | | - Ricardo Vilalta
- Department of Computer Science, University of Houston, Houston, TX, United States
| | - Belina von Krosigk
- Karlsruhe Institute of Technology, Karlsruhe, Germany
- Department of Physics, Universität Hamburg, Hamburg, Germany
| | - Shen Wang
- Department of Physics, University of Florida, Gainesville, FL, United States
| | - Thomas K. Warburton
- Department of Physics and Astronomy, Iowa State University, Ames, IA, United States
| |
Collapse
|
33
|
Nogueira-Rodríguez A, Reboiro-Jato M, Glez-Peña D, López-Fernández H. Performance of Convolutional Neural Networks for Polyp Localization on Public Colonoscopy Image Datasets. Diagnostics (Basel) 2022; 12:898. [PMID: 35453946 PMCID: PMC9027927 DOI: 10.3390/diagnostics12040898] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Revised: 03/31/2022] [Accepted: 04/01/2022] [Indexed: 01/10/2023] Open
Abstract
Colorectal cancer is one of the most frequent malignancies. Colonoscopy is the de facto standard for the detection of precancerous lesions in the colon, i.e., polyps, during screening studies or after facultative recommendation. In recent years, artificial intelligence, and especially deep learning techniques such as convolutional neural networks, have been applied to polyp detection and localization in order to develop real-time CADe systems. However, the performance of machine learning models is very sensitive to changes in the nature of the testing instances, especially when trying to reproduce results for totally different datasets to those used for model development, i.e., inter-dataset testing. Here, we report the results of testing our previously published polyp detection model on ten public colonoscopy image datasets and analyze them in the context of the results of 20 other state-of-the-art publications using the same datasets. The F1-score of our recently published model was 0.88 when evaluated on a private test partition, i.e., intra-dataset testing, but it decayed, on average, by 13.65% when tested on ten public datasets. In the published research, the average intra-dataset F1-score is 0.91, and we observed that it also decays in the inter-dataset setting, to an average F1-score of 0.83.
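The 13.65% figure is a relative decay from the intra-dataset F1 to the mean inter-dataset F1; a back-of-envelope check of that arithmetic is sketched below (the 0.76 value is an illustrative mean, not a number taken from the paper).

```python
def relative_decay(intra_f1, inter_f1_scores):
    """Average relative drop (%) from intra-dataset F1 to mean inter-dataset F1."""
    mean_inter = sum(inter_f1_scores) / len(inter_f1_scores)
    return 100.0 * (intra_f1 - mean_inter) / intra_f1

# With an intra-dataset F1 of 0.88, a ~13.65% decay corresponds to a mean
# inter-dataset F1 of roughly 0.76.
print(relative_decay(0.88, [0.76]))
```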
Collapse
Affiliation(s)
- Alba Nogueira-Rodríguez
- CINBIO, Department of Computer Science, ESEI-Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain; (A.N.-R.); (M.R.-J.); (D.G.-P.)
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
| | - Miguel Reboiro-Jato
- CINBIO, Department of Computer Science, ESEI-Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain; (A.N.-R.); (M.R.-J.); (D.G.-P.)
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
| | - Daniel Glez-Peña
- CINBIO, Department of Computer Science, ESEI-Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain; (A.N.-R.); (M.R.-J.); (D.G.-P.)
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
| | - Hugo López-Fernández
- CINBIO, Department of Computer Science, ESEI-Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain; (A.N.-R.); (M.R.-J.); (D.G.-P.)
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
| |
Collapse
|
34
|
Picon A, Terradillos E, Sánchez-Peralta LF, Mattana S, Cicchi R, Blover BJ, Arbide N, Velasco J, Etzezarraga MC, Pavone FS, Garrote E, Saratxaga CL. Novel Pixelwise Co-Registered Hematoxylin-Eosin and Multiphoton Microscopy Image Dataset for Human Colon Lesion Diagnosis. J Pathol Inform 2022; 13:100012. [PMID: 35223136 PMCID: PMC8855324 DOI: 10.1016/j.jpi.2022.100012] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Accepted: 01/09/2022] [Indexed: 12/29/2022] Open
Abstract
Colorectal cancer has one of the highest incidences of cancer worldwide. Diagnosis after colonoscopy relies on histopathology analysis of hematoxylin-eosin (H&E) images of the removed tissue. Novel techniques such as multi-photon microscopy (MPM) show promising results for performing real-time optical biopsies. However, clinicians are not used to this imaging modality, and the correlation between MPM and H&E information is not clear. The objective of this paper is to describe and make publicly available an extensive dataset of fully co-registered H&E and MPM images that allows the research community to analyze the relationship between MPM and H&E histopathological images and the effect of the semantic gap that prevents clinicians from correctly diagnosing MPM images. The dataset provides fully scanned tissue images at 10x optical resolution (0.5 µm/px) from 50 samples of lesions obtained by colonoscopies and colectomies. The diagnostic capabilities of TPF and H&E images were compared. Additionally, TPF tiles were virtually stained into H&E images by means of a deep-learning model. A panel of 5 expert pathologists evaluated the different modalities into three classes (healthy, adenoma/hyperplastic, and adenocarcinoma). Results showed that the performance of the pathologists on MPM images was 65% of the H&E performance, while the virtual staining method achieved 90%. MPM imaging can provide appropriate information for diagnosing colorectal cancer without the need for H&E staining. However, the existing semantic gap among modalities needs to be corrected.
Collapse
Affiliation(s)
- Artzai Picon
- TECNALIA, Basque Research and Technology Alliance (BRTA), Astondo bidea, Edificio 700, 48160 Derio (Bizkaia), Spain.,University of the Basque Country UPV/EHU, Ingeniero Torres Quevedo Plaza, 1, 48013 Bilbao, Spain
| | - Elena Terradillos
- TECNALIA, Basque Research and Technology Alliance (BRTA), Astondo bidea, Edificio 700, 48160 Derio (Bizkaia), Spain
| | - Luisa F Sánchez-Peralta
- Centro de Cirugía de Mínima Invasión Jesús Usón, Carretera N-521, km. 41,8, 10071 Cáceres, Spain
| | - Sara Mattana
- National Institute of Optics, National Research Council (CNR-INO), Largo E. Fermi 6, 50125 Florence, Italy.,European Laboratory for Non-Linear Spectroscopy (LENS), Via N. Carrara 1, Sesto Fiorentino 50019, Italy
| | - Riccardo Cicchi
- National Institute of Optics, National Research Council (CNR-INO), Largo E. Fermi 6, 50125 Florence, Italy.,European Laboratory for Non-Linear Spectroscopy (LENS), Via N. Carrara 1, Sesto Fiorentino 50019, Italy
| | - Benjamin J Blover
- Department of Surgery and Cancer, Imperial College London, London, UK
| | - Nagore Arbide
- Osakidetza Basque Health Service, Basurto University Hospital, Department of Pathological Anatomy, Bilbao (Bizkaia), Spain
| | - Jacques Velasco
- Osakidetza Basque Health Service, Basurto University Hospital, Department of Pathological Anatomy, Bilbao (Bizkaia), Spain
| | - Mª Carmen Etzezarraga
- Osakidetza Basque Health Service, Basurto University Hospital, Department of Pathological Anatomy, Bilbao (Bizkaia), Spain
| | - Francesco S Pavone
- Department of Physics, University of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino, Italy
| | - Estibaliz Garrote
- TECNALIA, Basque Research and Technology Alliance (BRTA), Astondo bidea, Edificio 700, 48160 Derio (Bizkaia), Spain
| | - Cristina L Saratxaga
- TECNALIA, Basque Research and Technology Alliance (BRTA), Astondo bidea, Edificio 700, 48160 Derio (Bizkaia), Spain
| |
Collapse
|
35
|
Nisha JS, Gopi VP, Palanisamy P. Automated colorectal polyp detection based on image enhancement and dual-path CNN architecture. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103465] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
|
36
|
Marques KF, Marques AF, Lopes MA, Beraldo RF, Lima TB, Sassaki LY. Artificial intelligence in colorectal cancer screening in patients with inflammatory bowel disease. Artif Intell Gastrointest Endosc 2022; 3:1-8. [DOI: 10.37126/aige.v3.i1.1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/27/2021] [Revised: 02/14/2022] [Accepted: 02/24/2022] [Indexed: 02/06/2023] Open
|
37
|
Nisha JS, Gopi VP, Palanisamy P. AUTOMATED POLYP DETECTION IN COLONOSCOPY VIDEOS USING IMAGE ENHANCEMENT AND SALIENCY DETECTION ALGORITHM. BIOMEDICAL ENGINEERING: APPLICATIONS, BASIS AND COMMUNICATIONS 2022; 34. [DOI: 10.4015/s1016237222500016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/01/2025]
Abstract
Colonoscopy has proven to be an effective diagnostic tool for examining anomalies in the lower half of the digestive system. This paper presents a Computer-Aided Detection (CAD) method for polyps in colonoscopy images that helps to diagnose Colorectal Cancer (CRC) at an early stage. The proposed method consists primarily of image enhancement, followed by the creation of a saliency map, feature extraction using the Histogram of Oriented Gradients (HOG) feature extractor, and classification using a Support Vector Machine (SVM). We present an efficient image enhancement algorithm for highlighting clinically significant features in colonoscopy images. The proposed enhancement approach can improve the overall contrast and brightness by minimizing the effects of inconsistent illumination conditions. Detailed experiments have been conducted using the publicly available colonoscopy databases CVC ClinicDB, CVC ColonDB and ETIS Larib. The performance measures are precision (91.69%), recall (81.53%), F1-score (86.31%) and F2-score (89.45%) for the CVC ColonDB database, and precision (90.29%), recall (61.73%), F1-score (73.32%) and F2-score (82.64%) for the ETIS Larib database. Comparison with existing methods shows that the proposed approach surpasses them in terms of precision, F1-score, and F2-score. The proposed enhancement with saliency-based selection significantly reduced the number of search windows, resulting in an efficient polyp detection algorithm.
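A minimal sketch of the HOG-plus-SVM classification stage described above, using scikit-image and scikit-learn on synthetic stand-in windows, is given below; the enhancement and saliency steps are omitted and all parameters are illustrative.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def hog_features(images):
    """HOG descriptor per grayscale image (candidate windows in practice)."""
    return np.array([
        hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for img in images
    ])

# Synthetic stand-in data: 20 grayscale 64x64 "windows" with binary labels.
rng = np.random.default_rng(0)
X_img = rng.random((20, 64, 64))
y = rng.integers(0, 2, size=20)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(hog_features(X_img), y)
print(clf.predict(hog_features(X_img[:3])))
```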
Collapse
Affiliation(s)
- J. S. Nisha
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, 620015, Tamil Nadu, India
| | - V. P. Gopi
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, 620015, Tamil Nadu, India
| | - P. Palanisamy
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, 620015, Tamil Nadu, India
| |
Collapse
|
38
|
Classification of the Confocal Microscopy Images of Colorectal Tumor and Inflammatory Colitis Mucosa Tissue Using Deep Learning. Diagnostics (Basel) 2022; 12:diagnostics12020288. [PMID: 35204379 PMCID: PMC8870781 DOI: 10.3390/diagnostics12020288] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Revised: 01/21/2022] [Accepted: 01/21/2022] [Indexed: 12/09/2022] Open
Abstract
Confocal microscopy image analysis is a useful method for neoplasm diagnosis. Many ambiguous cases are difficult to distinguish with the naked eye, thus leading to high inter-observer variability and significant time investments for learning this method. We aimed to develop a deep learning-based neoplasm classification model that classifies confocal microscopy images of 10× magnified colon tissues into three classes: neoplasm, inflammation, and normal tissue. ResNet50 with data augmentation and transfer learning approaches was used to efficiently train the model with limited training data. A class activation map was generated by using global average pooling to confirm which areas had a major effect on the classification. The proposed method achieved an accuracy of 81%, which was 14.05% more accurate than three machine learning-based methods and 22.6% better than the predictions made by four endoscopists. ResNet50 with data augmentation and transfer learning can be utilized to effectively identify neoplasm, inflammation, and normal tissue in confocal microscopy images. The proposed method outperformed three machine learning-based methods and identified the area that had a major influence on the results. Inter-observer variability and the time required for learning can be reduced if the proposed model is used with confocal microscopy image analysis for diagnosis.
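A minimal sketch of ResNet-50 transfer learning with a three-class head and a global-average-pooling class activation map, the general recipe described above rather than the authors' code, might look as follows (torchvision >= 0.13 is assumed for the weights API).

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ResNet-50 backbone with a new 3-class head
# (neoplasm / inflammation / normal); illustrative, not the authors' code.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 3)

def class_activation_map(model, x, class_idx):
    """CAM built from the last conv features and the fc weights (GAP-based)."""
    feats = {}
    def hook(_m, _i, out):
        feats["conv"] = out                    # (B, 2048, h, w)
    handle = model.layer4.register_forward_hook(hook)
    logits = model(x)
    handle.remove()
    w = model.fc.weight[class_idx]             # (2048,)
    cam = torch.einsum("c,bchw->bhw", w, feats["conv"])
    return logits, cam

x = torch.randn(1, 3, 224, 224)
logits, cam = class_activation_map(model, x, class_idx=0)
print(logits.shape, cam.shape)                 # (1, 3), (1, 7, 7)
```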
Collapse
|
39
|
Sánchez-Peralta LF, Pagador JB, Sánchez-Margallo FM. Artificial Intelligence for Colorectal Polyps in Colonoscopy. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_308] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
40
|
Luca M, Ciobanu A, Barbu T, Drug V. Artificial Intelligence and Deep Learning, Important Tools in Assisting Gastroenterologists. INTELLIGENT SYSTEMS REFERENCE LIBRARY 2022:197-213. [DOI: 10.1007/978-3-030-79161-2_8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/04/2025]
|
41
|
Wang S, Yin Y, Wang D, Lv Z, Wang Y, Jin Y. An interpretable deep neural network for colorectal polyp diagnosis under colonoscopy. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107568] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
42
|
Viscaino M, Torres Bustos J, Muñoz P, Auat Cheein C, Cheein FA. Artificial intelligence for the early detection of colorectal cancer: A comprehensive review of its advantages and misconceptions. World J Gastroenterol 2021; 27:6399-6414. [PMID: 34720530 PMCID: PMC8517786 DOI: 10.3748/wjg.v27.i38.6399] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/28/2021] [Revised: 04/26/2021] [Accepted: 09/14/2021] [Indexed: 02/06/2023] Open
Abstract
Colorectal cancer (CRC) was the second-ranked worldwide type of cancer during 2020, with a crude mortality rate of 12.0 per 100,000 inhabitants. It can be prevented if glandular tissue (adenomatous polyps) is detected early. Colonoscopy has been strongly recommended as a screening test for both early cancer and adenomatous polyps. However, it has some limitations, including the high miss rate for smaller (< 10 mm) or flat polyps, which are easily overlooked during visual inspection. Due to the rapid advancement of technology, artificial intelligence (AI) has been a thriving area in different fields, including medicine. Particularly, in gastroenterology, AI software has been included in computer-aided systems for diagnosis and to improve the accuracy of automatic polyp detection and its classification as a preventive method for CRC. This article provides an overview of recent research focusing on AI tools and their applications in the early detection of CRC and adenomatous polyps, as well as an insightful analysis of the main advantages and misconceptions in the field.
Collapse
Affiliation(s)
- Michelle Viscaino
- Department of Electronic Engineering, Universidad Tecnica Federico Santa Maria, Valparaiso 2340000, Chile
| | - Javier Torres Bustos
- Department of Electronic Engineering, Universidad Tecnica Federico Santa Maria, Valparaiso 2340000, Chile
| | - Pablo Muñoz
- Hospital Clinico, University of Chile, Santiago 8380456, Chile
| | - Cecilia Auat Cheein
- Facultad de Medicina, Universidad Nacional de Santiago del Estero, Santiago del Estero 4200, Argentina
| | - Fernando Auat Cheein
- Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaiso 2340000, Chile
| |
Collapse
|
43
|
Automated Bowel Polyp Detection Based on Actively Controlled Capsule Endoscopy: Feasibility Study. Diagnostics (Basel) 2021; 11:diagnostics11101878. [PMID: 34679575 PMCID: PMC8535114 DOI: 10.3390/diagnostics11101878] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Revised: 10/06/2021] [Accepted: 10/09/2021] [Indexed: 01/10/2023] Open
Abstract
This paper presents an active locomotion capsule endoscope system with 5D position sensing and real-time automated polyp detection for small-bowel and colon applications. An electromagnetic actuation system (EMA) consisting of stationary electromagnets is utilized to remotely control a magnetic capsule endoscope with multi-degree-of-freedom locomotion. For position sensing, an electronic system using a magnetic sensor array is built to track the position and orientation of the magnetic capsule during movement. The system is integrated with a deep learning model, named YOLOv3, which can automatically identify colorectal polyps in real time with an average precision of 85%. The feasibility of the proposed method concerning active locomotion and localization is validated and demonstrated through in vitro experiments in a phantom duodenum. This study provides a high-potential solution for automatic diagnostics of the bowel and colon using an active locomotion capsule endoscope, which could be applied in clinical settings in the future.
Collapse
|
44
|
Nogueira-Rodríguez A, Domínguez-Carbajales R, Campos-Tato F, Herrero J, Puga M, Remedios D, Rivas L, Sánchez E, Iglesias Á, Cubiella J, Fdez-Riverola F, López-Fernández H, Reboiro-Jato M, Glez-Peña D. Real-time polyp detection model using convolutional neural networks. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06496-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Colorectal cancer is a major health problem, where advances towards computer-aided diagnosis (CAD) systems to assist the endoscopist can be a promising path to improvement. Here, a deep learning model for real-time polyp detection based on a pre-trained YOLOv3 (You Only Look Once) architecture and complemented with a post-processing step based on an object-tracking algorithm to reduce false positives is reported. The base YOLOv3 network was fine-tuned using a dataset composed of 28,576 images labelled with the locations of 941 polyps that will be made public soon. In a frame-based evaluation using isolated images containing polyps, a general F1 score of 0.88 was achieved (recall = 0.87, precision = 0.89), with lower predictive performance for flat polyps but higher for sessile and pedunculated morphologies, as well as with the usage of narrow band imaging, whereas polyp size < 5 mm does not seem to have a significant impact. In a polyp-based evaluation using polyp and normal mucosa videos, with a positive criterion defined as the presence of at least one 50-frame-length (window size) segment with a ratio of 75% of frames with predicted bounding boxes (frames positivity), 72.61% sensitivity (95% CI 68.99–75.95) and 83.04% specificity (95% CI 76.70–87.92) were achieved (Youden = 0.55, diagnostic odds ratio (DOR) = 12.98). When the positive criterion is less stringent (window size = 25, frames positivity = 50%), sensitivity reaches around 90% (sensitivity = 89.91%, 95% CI 87.20–91.94; specificity = 54.97%, 95% CI 47.49–62.24; Youden = 0.45; DOR = 10.76). The object-tracking algorithm has demonstrated a significant improvement in specificity while maintaining sensitivity, as well as a marginal impact on computational performance. These results suggest that the model could be effectively integrated into a CAD system.
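The polyp-based positive criterion described above (a 50-frame window in which at least 75% of frames carry a predicted box) can be sketched as a sliding-window check over per-frame detections; the function below is illustrative, not the published implementation.

```python
def video_is_positive(frame_has_box, window_size=50, frames_positivity=0.75):
    """True if any window of `window_size` consecutive frames has at least
    `frames_positivity` fraction of frames with a predicted bounding box."""
    needed = int(window_size * frames_positivity)
    hits = sum(frame_has_box[:window_size])
    if len(frame_has_box) >= window_size and hits >= needed:
        return True
    for i in range(window_size, len(frame_has_box)):
        # Slide the window by one frame: add the new frame, drop the oldest.
        hits += frame_has_box[i] - frame_has_box[i - window_size]
        if hits >= needed:
            return True
    return False

# Toy run: 60 frames, boxes predicted on frames 5..49 (45 positive frames).
frames = [1 if 5 <= i < 50 else 0 for i in range(60)]
print(video_is_positive(frames))  # True: a 50-frame window reaches >= 75%
```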
Collapse
|
45
|
Durak S, Bayram B, Bakırman T, Erkut M, Doğan M, Gürtürk M, Akpınar B. Deep neural network approaches for detecting gastric polyps in endoscopic images. Med Biol Eng Comput 2021; 59:1563-1574. [PMID: 34259974 DOI: 10.1007/s11517-021-02398-8] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Accepted: 06/18/2021] [Indexed: 12/18/2022]
Abstract
Gastrointestinal endoscopy is the primary method used for the diagnosis and treatment of gastric polyps. The early detection and removal of polyps is vitally important in preventing cancer development. Many studies indicate that a high workload can contribute to misdiagnosing gastric polyps, even for experienced physicians. In this study, we aimed to establish a deep learning-based computer-aided diagnosis system for automatic gastric polyp detection. A private gastric polyp dataset was generated for this purpose, consisting of 2195 endoscopic images and 3031 polyp labels. Retrospective gastrointestinal endoscopy data from the Karadeniz Technical University, Farabi Hospital, were used in the study. YOLOv4, CenterNet, EfficientNet, Cross Stage ResNext50-SPP, YOLOv3, YOLOv3-SPP, Single Shot Detection, and Faster Regional CNN deep learning models were implemented and assessed to determine the most efficient model for precancerous gastric polyp detection. The dataset was split 70%/30% for training and testing all the implemented models. YOLOv4 was determined to be the most accurate model, with an 87.95% mean average precision. We also evaluated all the deep learning models using a public gastric polyp dataset as the test data. The results show that YOLOv4 has significant potential applicability in detecting gastric polyps and can be used effectively in gastrointestinal CAD systems.
Collapse
Affiliation(s)
- Serdar Durak
- Faculty of Medicine, Department of Gastroenterology, Karadeniz Technical University, Trabzon, Turkey
| | - Bülent Bayram
- Department of Geoinformatics, Yildiz Technical University, Istanbul, Turkey
| | - Tolga Bakırman
- Department of Geoinformatics, Yildiz Technical University, Istanbul, Turkey.
| | - Murat Erkut
- Faculty of Medicine, Department of Gastroenterology, Karadeniz Technical University, Trabzon, Turkey
| | - Metehan Doğan
- Department of Geoinformatics, Yildiz Technical University, Istanbul, Turkey
| | - Mert Gürtürk
- Department of Geoinformatics, Yildiz Technical University, Istanbul, Turkey
| | - Burak Akpınar
- Department of Geoinformatics, Yildiz Technical University, Istanbul, Turkey
| |
Collapse
|
46
|
Liew WS, Tang TB, Lin CH, Lu CK. Automatic colonic polyp detection using integration of modified deep residual convolutional neural network and ensemble learning approaches. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 206:106114. [PMID: 33984661 DOI: 10.1016/j.cmpb.2021.106114] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/25/2020] [Accepted: 04/07/2021] [Indexed: 05/10/2023]
Abstract
BACKGROUND AND OBJECTIVE The increased incidence of colorectal cancer (CRC) and its mortality rate have attracted interest in the use of artificial intelligence (AI) based computer-aided diagnosis (CAD) tools to detect polyps at an early stage. Although these CAD tools have thus far achieved a good level of accuracy in detecting polyps, they still have room for improvement (e.g., in sensitivity). Therefore, a new CAD tool is developed in this study to detect colonic polyps accurately. METHODS In this paper, we propose a novel approach to distinguish colonic polyps by integrating several techniques, including a modified deep residual network, principal component analysis and AdaBoost ensemble learning. A powerful deep residual network architecture, ResNet-50, was modified to reduce the computational time by altering its architecture. To keep interference to a minimum, median filtering, image thresholding, contrast enhancement, and normalisation techniques were applied to the endoscopic images to train the classification model. Three publicly available datasets, i.e., Kvasir, ETIS-LaribPolypDB, and CVC-ClinicDB, were merged to train the model, which included images with and without polyps. RESULTS The proposed approach, trained with a combination of the three datasets, achieved a Matthews Correlation Coefficient (MCC) of 0.9819, with accuracy, sensitivity, precision, and specificity of 99.10%, 98.82%, 99.37%, and 99.38%, respectively. CONCLUSIONS These results show that our method can consistently classify endoscopic images automatically and could be used to effectively develop computer-aided diagnostic tools for early CRC detection.
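A minimal sketch of the reduction-plus-boosting stage named in the pipeline above (PCA followed by AdaBoost, scored with the Matthews Correlation Coefficient) is shown below; the random features stand in for deep residual-network embeddings and all hyperparameters are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import make_pipeline
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split

# Random stand-ins for deep residual-network embeddings (real use: ResNet features).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 512))
y = rng.integers(0, 2, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(PCA(n_components=32), AdaBoostClassifier(n_estimators=100))
clf.fit(X_tr, y_tr)
print("MCC:", matthews_corrcoef(y_te, clf.predict(X_te)))
```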
Collapse
Affiliation(s)
- Win Sheng Liew
- Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak, Malaysia
| | - Tong Boon Tang
- Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak, Malaysia
| | - Cheng-Hung Lin
- Department of Electrical Engineering and Biomedical Engineering Research Center, Yuan Ze University, Jungli 32003, Taiwan
| | - Cheng-Kai Lu
- Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak, Malaysia.
| |
Collapse
|
47
|
Mitsala A, Tsalikidis C, Pitiakoudis M, Simopoulos C, Tsaroucha AK. Artificial Intelligence in Colorectal Cancer Screening, Diagnosis and Treatment. A New Era. ACTA ACUST UNITED AC 2021; 28:1581-1607. [PMID: 33922402 PMCID: PMC8161764 DOI: 10.3390/curroncol28030149] [Citation(s) in RCA: 122] [Impact Index Per Article: 30.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Revised: 04/09/2021] [Accepted: 04/20/2021] [Indexed: 12/24/2022]
Abstract
The development of artificial intelligence (AI) algorithms has permeated the medical field with great success. The widespread use of AI technology in diagnosing and treating several types of cancer, especially colorectal cancer (CRC), is now attracting substantial attention. CRC, which represents the third most commonly diagnosed malignancy in both men and women, is considered a leading cause of cancer-related deaths globally. Our review herein aims to provide in-depth knowledge and analysis of the AI applications in CRC screening, diagnosis, and treatment based on the current literature. We also explore the role of recent advances in AI systems regarding medical diagnosis and therapy, with several promising results. CRC is a highly preventable disease, and AI-assisted techniques in routine screening represent a pivotal step in reducing the incidence of this malignancy. So far, computer-aided detection and characterization systems have been developed to increase the detection rate of adenomas. Furthermore, CRC treatment is entering a new era with robotic surgery and novel computer-assisted drug delivery techniques. At the same time, healthcare is rapidly moving toward precision or personalized medicine. Machine learning models have the potential to contribute to individual-based cancer care and transform the future of medicine.
Collapse
Affiliation(s)
- Athanasia Mitsala
- Second Department of Surgery, University General Hospital of Alexandroupolis, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece; (C.T.); (M.P.); (C.S.)
- Correspondence: ; Tel.: +30-6986423707
| | - Christos Tsalikidis
- Second Department of Surgery, University General Hospital of Alexandroupolis, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece; (C.T.); (M.P.); (C.S.)
| | - Michail Pitiakoudis
- Second Department of Surgery, University General Hospital of Alexandroupolis, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece; (C.T.); (M.P.); (C.S.)
| | - Constantinos Simopoulos
- Second Department of Surgery, University General Hospital of Alexandroupolis, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece; (C.T.); (M.P.); (C.S.)
| | - Alexandra K. Tsaroucha
- Laboratory of Experimental Surgery & Surgical Research, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece;
| |
Collapse
|
48
|
Sánchez-Peralta LF, Pagador JB, Sánchez-Margallo FM. Artificial Intelligence for Colorectal Polyps in Colonoscopy. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_308-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
|
49
|
Golhar M, Bobrow TL, Khoshknab MP, Jit S, Ngamruengphong S, Durr NJ. Improving Colonoscopy Lesion Classification Using Semi-Supervised Deep Learning. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2021; 9:631-640. [PMID: 33747680 PMCID: PMC7978231 DOI: 10.1109/access.2020.3047544] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/07/2023]
Abstract
While data-driven approaches excel at many image analysis tasks, the performance of these approaches is often limited by a shortage of annotated data available for training. Recent work in semi-supervised learning has shown that meaningful representations of images can be obtained from training with large quantities of unlabeled data, and that these representations can improve the performance of supervised tasks. Here, we demonstrate that an unsupervised jigsaw learning task, in combination with supervised training, results in up to a 9.8% improvement in correctly classifying lesions in colonoscopy images when compared to a fully-supervised baseline. We additionally benchmark improvements in domain adaptation and out-of-distribution detection, and demonstrate that semi-supervised learning outperforms supervised learning in both cases. In colonoscopy applications, these metrics are important given the skill required for endoscopic assessment of lesions, the wide variety of endoscopy systems in use, and the homogeneity that is typical of labeled datasets.
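A minimal sketch of the jigsaw pretext idea referred to above, shuffling image tiles and asking a network to predict the permutation index, is given below; the grid size and permutation set are illustrative assumptions, not the paper's exact setup.

```python
import itertools
import random
import torch

def make_jigsaw_sample(img, grid=3, permutations=None, seed=None):
    """Split a (C, H, W) image into grid x grid tiles, shuffle them with a
    permutation drawn from a fixed set, and return (tiles, permutation index).
    Illustrative pretext-task sampler only."""
    if permutations is None:
        # First 100 permutations; real setups typically pick a diverse subset.
        permutations = list(itertools.islice(
            itertools.permutations(range(grid * grid)), 100))
    rng = random.Random(seed)
    p_idx = rng.randrange(len(permutations))
    c, h, w = img.shape
    th, tw = h // grid, w // grid
    tiles = [img[:, i*th:(i+1)*th, j*tw:(j+1)*tw]
             for i in range(grid) for j in range(grid)]
    shuffled = torch.stack([tiles[k] for k in permutations[p_idx]])
    return shuffled, p_idx   # a small CNN is then trained to predict p_idx

tiles, label = make_jigsaw_sample(torch.randn(3, 96, 96), seed=0)
print(tiles.shape, label)    # torch.Size([9, 3, 32, 32]), permutation index
```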
Collapse
Affiliation(s)
- Mayank Golhar
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
| | - Taylor L Bobrow
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
| | | | - Simran Jit
- Division of Gastroenterology and Hepatology, Johns Hopkins Hospital, Baltimore, MD 21287, USA
| | - Saowanee Ngamruengphong
- Division of Gastroenterology and Hepatology, Johns Hopkins Hospital, Baltimore, MD 21287, USA
| | - Nicholas J Durr
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
| |
Collapse
|
50
|
PICCOLO White-Light and Narrow-Band Imaging Colonoscopic Dataset: A Performance Comparative of Models and Datasets. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10238501] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Colorectal cancer is one of the world's leading causes of death. Fortunately, an early diagnosis allows for effective treatment, increasing the survival rate. Deep learning techniques have shown their utility for increasing the adenoma detection rate at colonoscopy, but a dataset is usually required so the model can automatically learn features that characterize the polyps. In this work, we present the PICCOLO dataset, which comprises 3433 manually annotated images (2131 white-light images and 1302 narrow-band images), originating from 76 lesions in 40 patients, which are distributed into training (2203), validation (897) and test (333) sets, assuring patient independence between sets. Furthermore, clinical metadata are also provided for each lesion. Four different models, obtained by combining two backbones and two encoder–decoder architectures, are trained with the PICCOLO dataset and two other publicly available datasets for comparison. Results are provided for the test set of each dataset. Models trained with the PICCOLO dataset have a better generalization capacity, as they perform more uniformly across the test sets of all datasets, rather than obtaining the best results only on their own test set. This dataset is available on the website of the Basque Biobank, so it is expected that it will contribute to the further development of deep learning methods for polyp detection, localisation and classification, which would eventually result in a better and earlier diagnosis of colorectal cancer, hence improving patient outcomes.
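A minimal sketch of a patient-independent split of frames into training, validation and test subsets, the property the PICCOLO splits guarantee, can be built with scikit-learn's GroupShuffleSplit as below; the frame counts and patient IDs are synthetic, and the split proportions are illustrative.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Synthetic stand-in: 500 frames drawn from 40 patients.
rng = np.random.default_rng(0)
frames = np.arange(500)
patient_ids = rng.integers(0, 40, size=500)

# First carve out a test set, then a validation set, grouping by patient so
# that no patient appears in more than one subset.
gss = GroupShuffleSplit(n_splits=1, test_size=0.1, random_state=0)
trainval_idx, test_idx = next(gss.split(frames, groups=patient_ids))
gss2 = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, val_idx = next(gss2.split(trainval_idx, groups=patient_ids[trainval_idx]))

print(len(trainval_idx[train_idx]), len(trainval_idx[val_idx]), len(test_idx))
assert set(patient_ids[test_idx]).isdisjoint(patient_ids[trainval_idx])
```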
Collapse
|