1. Macedo SEC, Freire MDBO, Kremer OS, Noal RB, Moraes FS, Cunha MAB. Machine learning algorithms applied to the diagnosis of COVID-19 based on epidemiological, clinical, and laboratory data. J Bras Pneumol 2025; 51:e20240385. PMID: 40172414; PMCID: PMC12097737; DOI: 10.36416/1806-3756/e20240385.
Abstract
OBJECTIVE To predict COVID-19 in hospitalized patients with SARS in a city in southern Brazil by using machine learning algorithms. METHODS The study sample consisted of patients ≥ 18 years of age admitted to the emergency department with SARS and hospitalized in the Hospital Escola - Universidade Federal de Pelotas between March and December of 2020. Epidemiological, clinical, and laboratory data were processed by machine learning algorithms in order to identify patterns. Mean AUC values were calculated for each combination of model and oversampling/undersampling techniques during cross-validation. RESULTS Of a total of 100 hospitalized patients with SARS, 78 had information for RT-PCR testing for SARS-CoV-2 infection and were therefore included in the analysis. Most (58%) of the patients were female, and the mean age was 61.4 ± 15.8 years. Regarding the machine learning models, the random forest model had a slightly higher median performance when compared with the other models tested and was therefore adopted. The most important features to diagnose COVID-19 were leukocyte count, PaCO2, troponin levels, duration of symptoms in days, platelet count, multimorbidity, presence of band forms, urea levels, age, and D-dimer levels, with an AUC of 87%. CONCLUSIONS Artificial intelligence techniques represent an efficient strategy to identify patients with high clinical suspicion, particularly in situations in which health care systems face intense strain, such as in the COVID-19 pandemic.
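As a rough illustration of the evaluation scheme this abstract describes (mean AUC per model under resampling during cross-validation), the sketch below fits a random forest with naive random oversampling of the minority class inside each training fold. The data, feature count, and oversampling method are stand-ins, not the study's actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))  # stand-ins for leukocyte count, PaCO2, troponin, ...
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0.8).astype(int)

aucs = []
for train, test in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    X_tr, y_tr = X[train], y[train]
    # naive random oversampling of the minority class, applied only to the training fold
    counts = np.bincount(y_tr)
    minority = np.flatnonzero(y_tr == counts.argmin())
    extra = rng.choice(minority, size=counts.max() - counts.min(), replace=True)
    X_tr = np.vstack([X_tr, X_tr[extra]])
    y_tr = np.concatenate([y_tr, y_tr[extra]])
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y[test], model.predict_proba(X[test])[:, 1]))

mean_auc = float(np.mean(aucs))  # averaged across folds, as in the study
```

Oversampling only inside the training fold (never the test fold) keeps the AUC estimate honest; resampling before splitting would leak duplicated patients into the test set.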
Affiliation(s)
- Oscar Schmitt Kremer
- Instituto Federal de Educação, Ciência e Tecnologia Sul-rio-grandense, Campus Pelotas, Pelotas (RS) Brasil
- Ricardo Bica Noal
- Faculdade de Medicina, Universidade Federal de Pelotas, Pelotas (RS) Brasil
- Fabiano Sandrini Moraes
- Instituto Federal de Educação, Ciência e Tecnologia Sul-rio-grandense, Campus Pelotas, Pelotas (RS) Brasil
- Mauro André Barbosa Cunha
- Instituto Federal de Educação, Ciência e Tecnologia Sul-rio-grandense, Campus Pelotas, Pelotas (RS) Brasil
2. Aggarwal S, Gupta I, Kumar A, Kautish S, Almazyad AS, Mohamed AW, Werner F, Shokouhifar M. GastroFuse-Net: an ensemble deep learning framework designed for gastrointestinal abnormality detection in endoscopic images. Math Biosci Eng 2024; 21:6847-6869. PMID: 39483096; DOI: 10.3934/mbe.2024300.
Abstract
Convolutional Neural Networks (CNNs) have received substantial attention as a highly effective tool for analyzing medical images, notably in interpreting endoscopic images, due to their capacity to provide results equivalent to or exceeding those of medical specialists. This capability is particularly crucial in the realm of gastrointestinal disorders, where even experienced gastroenterologists find the automatic diagnosis of such conditions from endoscopic pictures to be a challenging endeavor. Currently, gastrointestinal findings in medical diagnosis are primarily determined by manual inspection by competent gastrointestinal endoscopists. This evaluation procedure is labor-intensive, time-consuming, and frequently results in high variability between laboratories. To address these challenges, we introduced a specialized CNN-based architecture called GastroFuse-Net, designed to recognize human gastrointestinal diseases from endoscopic images. GastroFuse-Net was developed by combining features extracted from two different CNN models with different numbers of layers, integrating shallow and deep representations to capture diverse aspects of the abnormalities. The Kvasir dataset was used to thoroughly test the proposed deep learning model. This dataset contained images that were classified according to structures (cecum, z-line, pylorus), diseases (ulcerative colitis, esophagitis, polyps), or surgical operations (dyed resection margins, dyed lifted polyps). The proposed model was evaluated using various measures, including specificity, recall, precision, F1-score, Matthews Correlation Coefficient (MCC), and accuracy. The proposed GastroFuse-Net model exhibited exceptional performance, achieving a precision of 0.985, recall of 0.985, specificity of 0.984, F1-score of 0.997, MCC of 0.982, and an accuracy of 98.5%.
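The core idea here, fusing features from a shallower and a deeper convolutional branch before a shared classifier, can be sketched in a few lines of PyTorch. This is a minimal toy, not the authors' architecture; the branch depths, channel counts, and class count are illustrative assumptions:

```python
import torch
import torch.nn as nn

class TinyBranch(nn.Module):
    """A small convolutional feature extractor; `depth` sets the number of conv blocks."""
    def __init__(self, depth):
        super().__init__()
        layers, ch = [], 3
        for _ in range(depth):
            layers += [nn.Conv2d(ch, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)]
            ch = 8
        self.features = nn.Sequential(*layers, nn.AdaptiveAvgPool2d(1), nn.Flatten())

    def forward(self, x):
        return self.features(x)  # (N, 8) per-image feature vector

class FuseNet(nn.Module):
    """Concatenates shallow and deep branch features before a shared classifier head."""
    def __init__(self, n_classes=8):
        super().__init__()
        self.shallow = TinyBranch(depth=2)  # coarser, low-level representation
        self.deep = TinyBranch(depth=4)     # more abstract representation
        self.head = nn.Linear(8 + 8, n_classes)

    def forward(self, x):
        fused = torch.cat([self.shallow(x), self.deep(x)], dim=1)
        return self.head(fused)

logits = FuseNet()(torch.randn(2, 3, 64, 64))  # two random RGB frames
```

The concatenation step is what lets the classifier see shallow and deep representations of the same image simultaneously, which is the stated motivation for the fusion design.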
Affiliation(s)
- Sonam Aggarwal
- Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
- Isha Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
- Ashok Kumar
- Model Institute of Engineering and Technology, Jammu, J&K, India
- Abdulaziz S Almazyad
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
- Ali Wagdy Mohamed
- Operations Research Department, Faculty of Graduate Studies for Statistical Research, Cairo University, Giza 12613, Egypt
- Applied Science Research Center, Applied Science Private University, Amman 11931, Jordan
- Frank Werner
- Faculty of Mathematics, Otto-von-Guericke University, Magdeburg 39016, Germany
- Mohammad Shokouhifar
- Institute of Research and Development, Duy Tan University, Da Nang 550000, Vietnam
3. Mudavadkar GR, Deng M, Al-Heejawi SMA, Arora IH, Breggia A, Ahmad B, Christman R, Ryan ST, Amal S. Gastric Cancer Detection with Ensemble Learning on Digital Pathology: Use Case of Gastric Cancer on GasHisSDB Dataset. Diagnostics (Basel) 2024; 14:1746. PMID: 39202233; PMCID: PMC11354078; DOI: 10.3390/diagnostics14161746.
Abstract
Gastric cancer has become a serious worldwide health concern, emphasizing the crucial importance of early diagnosis measures to improve patient outcomes. While traditional histological image analysis is regarded as the clinical gold standard, it is manual and labour-intensive. In recognition of this problem, there has been a rise in interest in the use of computer-aided diagnostic tools to help pathologists with their diagnostic efforts. In particular, deep learning (DL) has emerged as a promising solution in this sector. However, current DL models are still restricted in their ability to extract extensive visual characteristics for correct categorization. To address this limitation, this study proposes the use of ensemble models, which combine the capabilities of several deep-learning architectures and use the aggregate knowledge of many models to improve classification performance, allowing for more accurate and efficient gastric cancer detection. To determine how well the proposed models performed, this study compared them with other works, all of which were based on the Gastric Histopathology Sub-Size Images Database, a publicly available dataset for gastric cancer. This research demonstrates that the ensemble models achieved high detection accuracy across all sub-databases, with an average accuracy exceeding 99%. Specifically, ResNet50, VGGNet, and ResNet34 performed better than EfficientNet and VitNet. For the 80 × 80-pixel sub-database, ResNet34 exhibited an accuracy of approximately 93%, VGGNet achieved 94%, and the ensemble model excelled with 99%. In the 120 × 120-pixel sub-database, the ensemble model showed 99% accuracy, VGGNet 97%, and ResNet50 approximately 97%. For the 160 × 160-pixel sub-database, the ensemble model again achieved 99% accuracy, VGGNet 98%, ResNet50 98%, and EfficientNet 92%, highlighting the ensemble model's superior performance across all resolutions. Overall, the ensemble model consistently provided an accuracy of 99% across the three patch-size sub-databases. These findings show that ensemble models can successfully detect critical characteristics from smaller patches and achieve high performance. The findings will help pathologists diagnose gastric cancer using histopathological images, leading to earlier identification and higher patient survival rates.
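The ensembling idea the abstract describes, combining several architectures' outputs, is commonly realized as soft voting: average each model's class probabilities and take the argmax. A toy sketch with made-up probabilities (the paper does not specify its exact fusion rule, so treat this as one plausible instance):

```python
import numpy as np

# per-model class probabilities for 4 image patches x 2 classes (normal, abnormal)
p_resnet = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]])
p_vgg    = np.array([[0.8, 0.2], [0.1, 0.9], [0.4, 0.6], [0.2, 0.8]])
p_effnet = np.array([[0.7, 0.3], [0.3, 0.7], [0.3, 0.7], [0.4, 0.6]])

ensemble = (p_resnet + p_vgg + p_effnet) / 3  # soft-voting average
labels = ensemble.argmax(axis=1)              # final class per patch
```

On the third patch the individual models disagree (0.6 vs 0.4 vs 0.3 for "normal"), and the average resolves the disagreement toward "abnormal", which is exactly how an ensemble can outperform its members.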
Affiliation(s)
- Govind Rajesh Mudavadkar
- College of Engineering, Northeastern University, Boston, MA 02115, USA
- Mo Deng
- College of Engineering, Northeastern University, Boston, MA 02115, USA
- Isha Hemant Arora
- Khoury College of Computer Sciences, Northeastern University, Boston, MA 02115, USA
- Anne Breggia
- MaineHealth Institute for Research, Scarborough, ME 04074, USA
- Bilal Ahmad
- Maine Medical Center, Portland, ME 04102, USA
- Robert Christman
- Maine Medical Center, Portland, ME 04102, USA
- Stephen T. Ryan
- Maine Medical Center, Portland, ME 04102, USA
- Saeed Amal
- The Roux Institute, Department of Bioengineering, College of Engineering at Northeastern University, Boston, MA 02115, USA
4. Sivari E, Bostanci E, Guzel MS, Acici K, Asuroglu T, Ercelebi Ayyildiz T. A New Approach for Gastrointestinal Tract Findings Detection and Classification: Deep Learning-Based Hybrid Stacking Ensemble Models. Diagnostics (Basel) 2023; 13:720. PMID: 36832205; PMCID: PMC9954881; DOI: 10.3390/diagnostics13040720.
Abstract
Endoscopic diagnosis of gastrointestinal tract findings depends on specialist experience and is subject to inter-observer variability, which can cause minor lesions to be missed and prevent early diagnosis. In this study, deep learning-based hybrid stacking ensemble modeling is proposed for detecting and classifying gastrointestinal system findings, aiming at early diagnosis with high accuracy and sensitivity while reducing specialist workload and adding objectivity to endoscopic diagnosis. In the first level of the proposed bi-level stacking ensemble approach, predictions are obtained by applying 5-fold cross-validation to three new CNN models. A machine learning classifier selected at the second level is trained on these predictions to reach the final classification result. The performance of the stacking models was compared with that of the individual deep learning models, and McNemar's statistical test was applied to support the results. According to the experimental results, the stacking ensemble models performed significantly better, with 98.42% ACC and 98.19% MCC on the KvasirV2 dataset and 98.53% ACC and 98.39% MCC on the HyperKvasir dataset. This study is the first to offer a learning-oriented approach that efficiently evaluates CNN features and provides objective, reliable results supported by statistical testing compared with state-of-the-art studies on the subject. The proposed approach improves the performance of deep learning models and outperforms the state-of-the-art studies in the literature.
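The bi-level stacking scheme described above, level-1 models produce out-of-fold predictions via 5-fold cross-validation, and a level-2 classifier is trained on those predictions, can be sketched with scikit-learn. The base models here are simple sklearn classifiers standing in for the paper's three CNNs, and the data is synthetic:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# level 1: out-of-fold probability predictions from each base model (5-fold CV)
level1 = [RandomForestClassifier(n_estimators=50, random_state=0),
          DecisionTreeClassifier(random_state=0)]
meta_features = np.column_stack([
    cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1] for m in level1
])

# level 2: a classifier trained on the level-1 predictions yields the final label
meta = LogisticRegression().fit(meta_features, y)
acc = meta.score(meta_features, y)
```

Using `cross_val_predict` for the level-1 outputs matters: each meta-feature value comes from a model that never saw that sample in training, so the level-2 classifier is not fit on leaked predictions.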
Affiliation(s)
- Esra Sivari
- Department of Computer Engineering, Cankiri Karatekin University, Cankiri 18100, Turkey
- Erkan Bostanci
- Department of Computer Engineering, Ankara University, Ankara 06830, Turkey
- Koray Acici
- Department of Artificial Intelligence and Data Engineering, Ankara University, Ankara 06830, Turkey
- Tunc Asuroglu
- Faculty of Medicine and Health Technology, Tampere University, 33720 Tampere, Finland
5. Houwen BBSL, Nass KJ, Vleugels JLA, Fockens P, Hazewinkel Y, Dekker E. Comprehensive review of publicly available colonoscopic imaging databases for artificial intelligence research: availability, accessibility, and usability. Gastrointest Endosc 2023; 97:184-199.e16. PMID: 36084720; DOI: 10.1016/j.gie.2022.08.043.
Abstract
BACKGROUND AND AIMS Publicly available databases containing colonoscopic imaging data are valuable resources for artificial intelligence (AI) research. Currently, little is known regarding the available number and content of these databases. This review aimed to describe the availability, accessibility, and usability of publicly available colonoscopic imaging databases, focusing on polyp detection, polyp characterization, and quality of colonoscopy. METHODS A systematic literature search was performed in MEDLINE and Embase to identify AI studies describing publicly available colonoscopic imaging databases published after 2010. Second, a targeted search using Google's Dataset Search, Google Search, GitHub, and Figshare was done to identify databases directly. Databases were included if they contained data about polyp detection, polyp characterization, or quality of colonoscopy. To assess accessibility of databases, the following categories were defined: open access, open access with barriers, and regulated access. To assess the potential usability of the included databases, essential details of each database were extracted using a checklist derived from the Checklist for Artificial Intelligence in Medical Imaging. RESULTS We identified 22 databases with open access, 3 databases with open access with barriers, and 15 databases with regulated access. The 22 open access databases contained 19,463 images and 952 videos. Nineteen of these databases focused on polyp detection, localization, and/or segmentation; 6 on polyp characterization; and 3 on quality of colonoscopy. Only half of these databases have been used by other researchers to develop, train, or benchmark their AI systems. Although technical details were in general well reported, important details such as polyp and patient demographics and the annotation process were under-reported in almost all databases. CONCLUSIONS This review provides greater insight into the public availability of colonoscopic imaging databases for AI research. Incomplete reporting of important details limits the ability of researchers to assess the usability of current databases.
Affiliation(s)
- Britt B S L Houwen
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Karlijn J Nass
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Jasper L A Vleugels
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Paul Fockens
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Yark Hazewinkel
- Department of Gastroenterology and Hepatology, Radboud University Nijmegen Medical Center, Radboud University of Nijmegen, Nijmegen, the Netherlands
- Evelien Dekker
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
6. Automatic detection of Crohn disease in wireless capsule endoscopic images using a deep convolutional neural network. Appl Intell 2022. DOI: 10.1007/s10489-022-04146-3.
Abstract
The diagnosis of Crohn's disease (CD) in the small bowel is generally performed by reviewing a very large number of images captured by capsule endoscopy (CE). This diagnostic technique entails a heavy workload for specialists in terms of time spent reviewing the images. This paper presents a convolutional neural network capable of classifying CE images to identify those affected by lesions indicative of the disease. The architecture of the proposed network was custom designed for this image classification problem, allowing different design decisions aimed at improving its accuracy and processing speed compared with other state-of-the-art deep-learning-based reference architectures. The experimentation was carried out on a set of 15,972 images extracted from 31 CE videos of patients affected by CD, 7,986 of which showed lesions associated with the disease. Training, validation/selection, and evaluation of the network were performed on 70%, 10%, and 20% of the total images, respectively. The ROC curve obtained on the test image set has an area greater than 0.997, with points in a 95-99% sensitivity range associated with specificities of 99-96%. These figures are higher than those achieved by EfficientNet-B5, VGG-16, Xception, or ResNet networks, which also require a significantly higher average processing time per image than the proposed architecture. The network outlined in this paper is therefore sufficiently promising to be considered for integration into tools used by specialists in their diagnosis of CD. In the sample of images analysed, the network detected 99% of the images with lesions, filtering out from specialist review 96% of those with no signs of disease.
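The sensitivity/specificity pairs quoted above are points on a ROC curve, obtained by sweeping the decision threshold over the network's output scores. A self-contained sketch with synthetic scores (not the paper's data) shows how such pairs and the area under the curve are computed:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# synthetic scores: images with lesions score higher on average than lesion-free ones
scores = np.concatenate([rng.normal(2.0, 1.0, 500), rng.normal(0.0, 1.0, 500)])
labels = np.concatenate([np.ones(500), np.zeros(500)])

fpr, tpr, thresholds = roc_curve(labels, scores)
specificity = 1 - fpr  # each (tpr, specificity) pair is one operating point
auc = roc_auc_score(labels, scores)
```

Picking an operating point in a high-sensitivity region (say tpr ≥ 0.95) and reading off the corresponding specificity is exactly the trade-off the abstract reports for lesion screening.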
7. Fati SM, Senan EM, Azar AT. Hybrid and Deep Learning Approach for Early Diagnosis of Lower Gastrointestinal Diseases. Sensors (Basel) 2022; 22:4079. PMID: 35684696; PMCID: PMC9185306; DOI: 10.3390/s22114079.
Abstract
Every year, nearly two million people die as a result of gastrointestinal (GI) disorders. Lower gastrointestinal tract tumors are one of the leading causes of death worldwide, so early detection of the type of tumor is of great importance for patient survival. Additionally, removing benign tumors in their early stages has more risks than benefits. Video endoscopy technology is essential for imaging the GI tract and identifying disorders such as bleeding, ulcers, polyps, and malignant tumors. Videography generates some 5,000 frames, which require extensive analysis, and following all frames takes a long time. Artificial intelligence techniques, with their ability to diagnose and to assist physicians in making accurate diagnostic decisions, address these challenges. In this study, multiple methodologies were developed; the work was divided into four proposed systems, each with more than one diagnostic method. The first proposed system utilizes artificial neural network (ANN) and feed-forward neural network (FFNN) algorithms based on hybrid features extracted by three algorithms: local binary pattern (LBP), gray level co-occurrence matrix (GLCM), and fuzzy color histogram (FCH). The second proposed system uses pre-trained CNN models, GoogLeNet and AlexNet, based on the extraction of deep feature maps and their classification with high accuracy. The third proposed system uses hybrid techniques consisting of two blocks: the first block uses CNN models (GoogLeNet and AlexNet) to extract feature maps; the second block is a support vector machine (SVM) algorithm for classifying the deep feature maps. The fourth proposed system uses ANN and FFNN based on hybrid features combining the CNN models (GoogLeNet and AlexNet) with the LBP, GLCM, and FCH algorithms. All the proposed systems achieved superior results in diagnosing endoscopic images for the early detection of lower gastrointestinal diseases. All systems produced promising results; the FFNN classifier based on the hybrid features extracted by GoogLeNet, LBP, GLCM, and FCH achieved an accuracy of 99.3%, precision of 99.2%, sensitivity of 99%, specificity of 100%, and AUC of 99.87%.
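The "hybrid features" idea running through these four systems, concatenating deep CNN features with handcrafted texture/color features before a classifier, reduces to a feature-stacking step. In the sketch below both feature blocks are random placeholders (not actual GoogLeNet or LBP/GLCM/FCH outputs), fed to an SVM as in the third and fourth systems:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_images = 120
deep_feats = rng.normal(size=(n_images, 64))     # stand-in for CNN feature maps
texture_feats = rng.normal(size=(n_images, 16))  # stand-in for LBP/GLCM/FCH features
y = rng.integers(0, 2, size=n_images)            # random binary labels

X = np.hstack([deep_feats, texture_feats])  # one hybrid feature vector per image
clf = SVC(kernel="rbf").fit(X, y)
preds = clf.predict(X)
```

The appeal of the hybrid vector is that the handcrafted block encodes texture and color statistics the CNN may not prioritize, while the deep block contributes learned semantic features.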
Affiliation(s)
- Suliman Mohamed Fati
- College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Ebrahim Mohammed Senan
- Department of Computer Science & Information Technology, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad 431004, India
- Ahmad Taher Azar
- College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Faculty of Computers and Artificial Intelligence, Benha University, Benha 13518, Egypt
8. Wang W, Yang X, Li X, Tang J. Convolutional-capsule network for gastrointestinal endoscopy image classification. Int J Intell Syst 2022. DOI: 10.1002/int.22815.
Affiliation(s)
- Wei Wang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, Jiangsu, China
- Xin Yang
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Xin Li
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan, Hubei, China
- Jinhui Tang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, Jiangsu, China
9. Artificial Intelligence in Gastroenterology. Artif Intell Med 2022. DOI: 10.1007/978-3-030-58080-3_163-2.
10. Strümke I, Hicks SA, Thambawita V, Jha D, Parasa S, Riegler MA, Halvorsen P. Artificial Intelligence in Gastroenterology. Artif Intell Med 2022. DOI: 10.1007/978-3-030-64573-1_163.
11. Cao B, Zhang KC, Wei B, Chen L. Status quo and future prospects of artificial neural network from the perspective of gastroenterologists. World J Gastroenterol 2021; 27:2681-2709. PMID: 34135549; PMCID: PMC8173384; DOI: 10.3748/wjg.v27.i21.2681.
Abstract
Artificial neural networks (ANNs) are one of the primary types of artificial intelligence and have been rapidly developed and applied in many fields. In recent years, there has been a sharp increase in research concerning ANNs in gastrointestinal (GI) diseases. This state-of-the-art technique exhibits excellent performance in diagnosis, prognostic prediction, and treatment. Competitions between ANNs and GI experts suggest that, by virtue of technical advancements, efficiency and accuracy can be compatible. However, the shortcomings of ANNs are not negligible and may induce alterations in many aspects of medical practice. In this review, we introduce basic knowledge about ANNs and summarize their current achievements in GI diseases from the perspective of gastroenterologists. Existing limitations and future directions are also proposed to optimize the clinical potential of ANNs. In consideration of barriers to interdisciplinary knowledge, sophisticated concepts are discussed using plain words and metaphors to make this review more easily understood by medical practitioners and the general public.
Affiliation(s)
- Bo Cao
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Ke-Cheng Zhang
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Bo Wei
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Lin Chen
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
12. Saito H, Tanimoto T, Ozawa T, Ishihara S, Fujishiro M, Shichijo S, Hirasawa D, Matsuda T, Endo Y, Tada T. Automatic anatomical classification of colonoscopic images using deep convolutional neural networks. Gastroenterol Rep (Oxf) 2021; 9:226-233. PMID: 34316372; PMCID: PMC8309686; DOI: 10.1093/gastro/goaa078.
Abstract
BACKGROUND A colonoscopy can detect colorectal diseases, including cancers, polyps, and inflammatory bowel diseases. A computer-aided diagnosis (CAD) system using deep convolutional neural networks (CNNs) that can recognize anatomical locations during a colonoscopy could efficiently assist practitioners. We aimed to construct a CAD system using a CNN to distinguish colorectal images from parts of the cecum, ascending colon, transverse colon, descending colon, sigmoid colon, and rectum. METHODS We constructed a CNN by training it on 9,995 colonoscopy images and tested its performance on 5,121 independent colonoscopy images that were categorized according to seven anatomical locations: the terminal ileum, the cecum, ascending colon to transverse colon, descending colon to sigmoid colon, the rectum, the anus, and indistinguishable parts. We examined images taken during total colonoscopy performed between January 2017 and November 2017 at a single center. We evaluated the concordance between the diagnoses made by endoscopists and those made by the CNN. The main outcomes of the study were the sensitivity and specificity of the CNN for the anatomical categorization of colonoscopy images. RESULTS The constructed CNN recognized anatomical locations of colonoscopy images with the following areas under the curves: 0.979 for the terminal ileum; 0.940 for the cecum; 0.875 for ascending colon to transverse colon; 0.846 for descending colon to sigmoid colon; 0.835 for the rectum; and 0.992 for the anus. During the test process, the CNN system correctly recognized 66.6% of images. CONCLUSION We constructed a new CNN system with clinically relevant performance for recognizing anatomical locations of colonoscopy images, which is the first step in constructing a CAD system that will support practitioners during colonoscopy and provide assurance of the quality of the colonoscopy procedure.
Affiliation(s)
- Hiroaki Saito
- Department of Gastroenterology, Sendai Kousei Hospital, Miyagi, Japan
- Tsuyoshi Ozawa
- Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan
- Department of Surgery, Teikyo University School of Medicine, Tokyo, Japan
- Soichiro Ishihara
- Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan
- Department of Surgical Oncology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Mitsuhiro Fujishiro
- Department of Gastroenterology and Hepatology, Nagoya University Graduate School of Medicine, Aichi, Japan
- Satoki Shichijo
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
- Dai Hirasawa
- Department of Gastroenterology, Sendai Kousei Hospital, Miyagi, Japan
- Tomoki Matsuda
- Department of Gastroenterology, Sendai Kousei Hospital, Miyagi, Japan
- Yuma Endo
- AI Medical Service, Inc., Tokyo, Japan
- Tomohiro Tada
- Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan
- Department of Surgical Oncology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- AI Medical Service, Inc., Tokyo, Japan
13. Artificial Intelligence in Medicine. Artif Intell Med 2021. DOI: 10.1007/978-3-030-58080-3_163-1.
14. Naz J, Sharif M, Yasmin M, Raza M, Khan MA. Detection and Classification of Gastrointestinal Diseases using Machine Learning. Curr Med Imaging 2021; 17:479-490. PMID: 32988355; DOI: 10.2174/1573405616666200928144626.
Abstract
BACKGROUND Traditional endoscopy is an invasive and painful method of examining the gastrointestinal tract (GIT), and it is not favored by physicians or patients. To handle this issue, video endoscopy (VE) or wireless capsule endoscopy (WCE) is recommended and utilized for GIT examination. Furthermore, manual assessment of the captured images is not feasible for an expert physician, because thoroughly analyzing thousands of images is a time-consuming task. Hence the need for a Computer-Aided-Diagnosis (CAD) method to help doctors analyze the images. Many researchers have proposed techniques for automated recognition and classification of abnormalities in captured images. METHODS In this article, existing methods for automated classification, segmentation, and detection of several GI diseases are discussed. The paper gives comprehensive details of these state-of-the-art methods. Furthermore, the literature is divided into several subsections covering preprocessing techniques, segmentation techniques, handcrafted-feature-based techniques, and deep-learning-based techniques. Finally, issues, challenges, and limitations are also discussed. RESULTS A comparative analysis of different approaches for the detection and classification of GI infections is presented. CONCLUSION This comprehensive review combines information on a number of GI disease diagnosis methods in one place. It will facilitate researchers in developing new algorithms and approaches for early detection of GI diseases, with more promising results than the existing ones in the literature.
Affiliation(s)
- Javeria Naz
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mussarat Yasmin
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mudassar Raza
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
15
Kwon O, Yong TH, Kang SR, Kim JE, Huh KH, Heo MS, Lee SS, Choi SC, Yi WJ. Automatic diagnosis for cysts and tumors of both jaws on panoramic radiographs using a deep convolution neural network. Dentomaxillofac Radiol 2020; 49:20200185. [PMID: 32574113 PMCID: PMC7719862 DOI: 10.1259/dmfr.20200185] [Citation(s) in RCA: 86] [Impact Index Per Article: 17.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2020] [Revised: 05/29/2020] [Accepted: 06/11/2020] [Indexed: 12/13/2022] Open
Abstract
OBJECTIVES The purpose of this study was to automatically diagnose odontogenic cysts and tumors of both jaws on panoramic radiographs using deep learning. We proposed a novel framework of deep convolution neural network (CNN) with data augmentation for detection and classification of the multiple diseases. METHODS We developed a deep CNN modified from YOLOv3 for detecting and classifying odontogenic cysts and tumors of both jaws. Our data set of 1282 panoramic radiographs comprised 350 dentigerous cysts (DCs), 302 periapical cysts (PCs), 300 odontogenic keratocysts (OKCs), 230 ameloblastomas (ABs), and 100 normal jaws with no disease. In addition, the number of radiographs was augmented 12-fold by flip, rotation, and intensity changes. We evaluated the classification performance of the developed CNN by calculating sensitivity, specificity, accuracy, and area under the curve (AUC) for diseases of both jaws. RESULTS The overall classification performance for the diseases improved from 78.2% sensitivity, 93.9% specificity, 91.3% accuracy, and 0.86 AUC using the CNN with the unaugmented data set to 88.9% sensitivity, 97.2% specificity, 95.6% accuracy, and 0.94 AUC using the CNN with the augmented data set. The CNN using the augmented data set had the following sensitivities, specificities, accuracies, and AUCs: 91.4%, 99.2%, 97.8%, and 0.96 for DCs; 82.8%, 99.2%, 96.2%, and 0.92 for PCs; 98.4%, 92.3%, 94.0%, and 0.97 for OKCs; 71.7%, 100%, 94.3%, and 0.86 for ABs; and 100.0%, 95.1%, 96.0%, and 0.97 for normal jaws, respectively. CONCLUSION The CNN method we developed for automatically diagnosing odontogenic cysts and tumors of both jaws on panoramic radiographs using data augmentation showed high sensitivity, specificity, accuracy, and AUC despite the limited number of panoramic images involved.
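The 12-fold augmentation described in this abstract (flips, rotations, and intensity changes) can be sketched roughly as follows. The specific transform parameters — which rotations, and ±10% intensity scaling — are illustrative assumptions, not those of the original study:

```python
import numpy as np

def augment_12x(image):
    """Expand one radiograph into 12 variants via a horizontal flip,
    three rotations, and two intensity scalings (2 x 3 x 2 = 12).
    Parameters are illustrative only."""
    variants = []
    for flipped in (image, np.fliplr(image)):   # original and mirrored
        for k in (0, 1, 3):                     # 0, 90, 270 degree rotations
            rotated = np.rot90(flipped, k)
            for gain in (0.9, 1.1):             # intensity change, clipped to 8-bit range
                variants.append(np.clip(rotated * gain, 0, 255))
    return variants

panoramic = np.full((16, 16), 100.0)            # stand-in for a radiograph
augmented = augment_12x(panoramic)
```

Any augmentation scheme of this shape multiplies the effective training-set size without new annotation effort, which is the mechanism the study credits for its AUC gain.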
Affiliation(s)
- Odeuk Kwon
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Seoul National University, Seoul, South Korea
- Tae-Hoon Yong
- Department of Biomedical Radiation Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea
- Se-Ryong Kang
- Department of Biomedical Radiation Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea
- Jo-Eun Kim
- Department of Oral and Maxillofacial Radiology, Seoul National University Dental Hospital, Seoul, South Korea
- Kyung-Hoe Huh
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, BK21, Seoul National University, Seoul, South Korea
- Min-Suk Heo
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, BK21, Seoul National University, Seoul, South Korea
- Sam-Sun Lee
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, BK21, Seoul National University, Seoul, South Korea
- Soon-Chul Choi
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, BK21, Seoul National University, Seoul, South Korea
16
Rahim T, Usman MA, Shin SY. A survey on contemporary computer-aided tumor, polyp, and ulcer detection methods in wireless capsule endoscopy imaging. Comput Med Imaging Graph 2020; 85:101767. [DOI: 10.1016/j.compmedimag.2020.101767] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2019] [Revised: 07/13/2020] [Accepted: 07/18/2020] [Indexed: 12/12/2022]
17
Viswanath YKS, Vaze S, Bird R. Application of convolutional neural networks for computer-aided detection and diagnosis in gastrointestinal pathology: A simplified exposition for an endoscopist. Artif Intell Gastrointest Endosc 2020; 1:1-5. [DOI: 10.37126/aige.v1.i1.1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/01/2020] [Revised: 07/14/2020] [Accepted: 07/16/2020] [Indexed: 02/06/2023] Open
18
Jin EH, Lee D, Bae JH, Kang HY, Kwak MS, Seo JY, Yang JI, Yang SY, Lim SH, Yim JY, Lim JH, Chung GE, Chung SJ, Choi JM, Han YM, Kang SJ, Lee J, Chan Kim H, Kim JS. Improved Accuracy in Optical Diagnosis of Colorectal Polyps Using Convolutional Neural Networks with Visual Explanations. Gastroenterology 2020; 158:2169-2179.e8. [PMID: 32119927 DOI: 10.1053/j.gastro.2020.02.036] [Citation(s) in RCA: 87] [Impact Index Per Article: 17.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/19/2019] [Revised: 01/10/2020] [Accepted: 02/20/2020] [Indexed: 02/08/2023]
Abstract
BACKGROUND & AIMS Narrow-band imaging (NBI) can be used to determine whether colorectal polyps are adenomatous or hyperplastic. We investigated whether an artificial intelligence (AI) system can increase the accuracy of characterizations of polyps by endoscopists of different skill levels. METHODS We developed convolutional neural networks (CNNs) for evaluation of diminutive colorectal polyps, based on efficient neural architecture searches via parameter sharing with augmentation using NBIs of diminutive (≤5 mm) polyps, collected from October 2015 through October 2017 at the Seoul National University Hospital, Healthcare System Gangnam Center (training set). We trained the CNN using images from 1100 adenomatous polyps and 1050 hyperplastic polyps from 1379 patients. We then tested the system using 300 images of 180 adenomatous polyps and 120 hyperplastic polyps, obtained from January 2018 to May 2019. We compared the accuracy of 22 endoscopists of different skill levels (7 novices, 4 experts, and 11 NBI-trained experts) vs the CNN in evaluation of images (adenomatous vs hyperplastic) from 180 adenomatous and 120 hyperplastic polyps. The endoscopists then evaluated the polyp images with knowledge of the CNN-processed results. We conducted mixed-effect logistic and linear regression analyses to determine the effects of AI assistance on the accuracy of analysis of diminutive colorectal polyps by endoscopists (primary outcome). RESULTS The CNN distinguished adenomatous vs hyperplastic diminutive polyps with 86.7% accuracy, based on histologic analysis as the reference standard. Endoscopists distinguished adenomatous vs hyperplastic diminutive polyps with 82.5% overall accuracy (novices, 73.8% accuracy; experts, 83.8% accuracy; and NBI-trained experts, 87.6% accuracy). With knowledge of the CNN-processed results, the overall accuracy of the endoscopists increased to 88.5% (P < .05). 
With knowledge of the CNN-processed results, the accuracy of novice endoscopists increased to 85.6% (P < .05). The CNN-processed results significantly reduced endoscopist time of diagnosis (from 3.92 to 3.37 seconds per polyp, P = .042). CONCLUSIONS We developed a CNN that significantly increases the accuracy of evaluation of diminutive colorectal polyps (as adenomatous vs hyperplastic) and reduces the time of diagnosis by endoscopists. This AI assistance system significantly increased the accuracy of analysis by novice endoscopists, who achieved near-expert levels of accuracy without extra training. The CNN assistance system can reduce the skill-level dependence of endoscopists and costs.
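The accuracy, sensitivity, and specificity figures this study reports follow from a standard confusion-matrix computation over histology-confirmed labels, which can be sketched as below. The labels and predictions here are toy data, not the study's:

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity for a binary call,
    e.g. adenomatous (1) vs hyperplastic (0), against a reference standard."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
    }

# Toy example: six polyps, histologic analysis as the reference standard
truth = np.array([1, 1, 1, 0, 0, 0])
calls = np.array([1, 1, 0, 0, 0, 1])
metrics = diagnostic_metrics(truth, calls)
```

The study's primary outcome (change in endoscopist accuracy with AI assistance) compares such metrics before and after the readers see the CNN output, with mixed-effects models accounting for repeated readings per endoscopist.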
Affiliation(s)
- Eun Hyo Jin
- Department of Internal Medicine, Healthcare Research Institute, Seoul National University Hospital Healthcare System Gangnam Center, Seoul, Korea
- Dongheon Lee
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, Korea
- Jung Ho Bae
- Department of Internal Medicine, Healthcare Research Institute, Seoul National University Hospital Healthcare System Gangnam Center, Seoul, Korea
- Hae Yeon Kang
- Department of Internal Medicine, Healthcare Research Institute, Seoul National University Hospital Healthcare System Gangnam Center, Seoul, Korea
- Min-Sun Kwak
- Department of Internal Medicine, Healthcare Research Institute, Seoul National University Hospital Healthcare System Gangnam Center, Seoul, Korea
- Ji Yeon Seo
- Department of Internal Medicine, Healthcare Research Institute, Seoul National University Hospital Healthcare System Gangnam Center, Seoul, Korea
- Jong In Yang
- Department of Internal Medicine, Healthcare Research Institute, Seoul National University Hospital Healthcare System Gangnam Center, Seoul, Korea
- Sun Young Yang
- Department of Internal Medicine, Healthcare Research Institute, Seoul National University Hospital Healthcare System Gangnam Center, Seoul, Korea
- Seon Hee Lim
- Department of Internal Medicine, Healthcare Research Institute, Seoul National University Hospital Healthcare System Gangnam Center, Seoul, Korea
- Jeong Yoon Yim
- Department of Internal Medicine, Healthcare Research Institute, Seoul National University Hospital Healthcare System Gangnam Center, Seoul, Korea
- Joo Hyun Lim
- Department of Internal Medicine, Healthcare Research Institute, Seoul National University Hospital Healthcare System Gangnam Center, Seoul, Korea
- Goh Eun Chung
- Department of Internal Medicine, Healthcare Research Institute, Seoul National University Hospital Healthcare System Gangnam Center, Seoul, Korea
- Su Jin Chung
- Department of Internal Medicine, Healthcare Research Institute, Seoul National University Hospital Healthcare System Gangnam Center, Seoul, Korea
- Ji Min Choi
- Department of Internal Medicine, Healthcare Research Institute, Seoul National University Hospital Healthcare System Gangnam Center, Seoul, Korea
- Yoo Min Han
- Department of Internal Medicine, Healthcare Research Institute, Seoul National University Hospital Healthcare System Gangnam Center, Seoul, Korea
- Seung Joo Kang
- Department of Internal Medicine, Healthcare Research Institute, Seoul National University Hospital Healthcare System Gangnam Center, Seoul, Korea
- Jooyoung Lee
- Department of Internal Medicine, Liver Research Institute, Seoul National University College of Medicine, Seoul, Korea
- Hee Chan Kim
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, Korea; Department of Biomedical Engineering College of Medicine, Seoul National University, Seoul, Korea; Institute of Medical & Biological Engineering, Medical Research Center, Seoul National University, Seoul, Korea.
- Joo Sung Kim
- Department of Internal Medicine, Healthcare Research Institute, Seoul National University Hospital Healthcare System Gangnam Center, Seoul, Korea; Department of Internal Medicine, Liver Research Institute, Seoul National University College of Medicine, Seoul, Korea.
19
Azer SA. Deep learning with convolutional neural networks for identification of liver masses and hepatocellular carcinoma: A systematic review. World J Gastrointest Oncol 2019; 11:1218-1230. [PMID: 31908726 PMCID: PMC6937442 DOI: 10.4251/wjgo.v11.i12.1218] [Citation(s) in RCA: 44] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/02/2019] [Revised: 07/09/2019] [Accepted: 10/03/2019] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Artificial intelligence techniques such as convolutional neural networks (CNNs) have been used in the interpretation of images and the diagnosis of hepatocellular cancer (HCC) and liver masses. The CNN, a deep machine-learning algorithm, has demonstrated its capability to recognise specific features that can detect pathological lesions. AIM To assess the use of CNNs in examining HCC and liver mass images for the diagnosis of cancer, and to evaluate the accuracy and performance of CNNs. METHODS The databases PubMed, EMBASE, and the Web of Science, as well as research books, were systematically searched using related keywords. Studies analysing pathological anatomy, cellular, and radiological images of HCC or liver masses using CNNs were identified according to the study protocol, whether to detect cancer, differentiate cancer from other lesions, or stage the lesion. The data were extracted according to a predefined extraction protocol. The accuracy and performance of the CNNs in detecting cancer or early stages of cancer were analysed. The primary outcomes of the study were the type of cancer or liver mass analysed and the type of images that showed optimum accuracy in cancer detection. RESULTS A total of 11 studies that met the selection criteria and were consistent with the aims of the study were identified. The studies demonstrated the ability to differentiate liver masses or differentiate HCC from other lesions (n = 6), HCC from cirrhosis or development of new tumours (n = 3), and HCC nuclei grading or segmentation (n = 2). The CNNs showed satisfactory levels of accuracy. The studies aimed at detecting lesions (n = 4), classification (n = 5), and segmentation (n = 2). Several methods were used to assess the accuracy of the CNN models. CONCLUSION These studies demonstrate the role of CNNs in analysing images and serving as tools for early detection of HCC or liver masses. Although a few limitations were identified, overall the CNNs achieved an optimal level of accuracy in the segmentation and classification of liver cancer images.
Affiliation(s)
- Samy A Azer
- Department of Medical Education, King Saud University College of Medicine, Riyadh 11461, Saudi Arabia
20
Feature extraction using traditional image processing and convolutional neural network methods to classify white blood cells: a study. Australas Phys Eng Sci Med 2019; 42:627-638. [PMID: 30830652 DOI: 10.1007/s13246-019-00742-9] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/19/2018] [Accepted: 02/25/2019] [Indexed: 12/28/2022]
Abstract
White blood cells play a vital role in monitoring the health condition of a person. A change in the count and/or appearance of these cells indicates hematological disorders. Manual microscopic evaluation of white blood cells is the gold-standard method, but the result depends on the skill and experience of the hematologist. In this paper we present a comparative study of feature extraction using two approaches for the classification of white blood cells. In the first approach, features were extracted using traditional image processing methods; in the second, we employed AlexNet, a pre-trained convolutional neural network, as a feature generator. We used a neural network for classification of the WBCs. The results demonstrate that classification is slightly better for the features extracted using the convolutional neural network approach than for the traditional image processing approach. An average accuracy and sensitivity of 99% were obtained for the classification of white blood cells. Hence, either of these methods can be used for classification of WBCs, depending on the availability of data and the required resources.
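The second approach in this abstract — a frozen pre-trained CNN used purely as a feature generator, with a separate classifier on top — can be sketched as follows. Everything here is a stand-in: fixed random weights instead of AlexNet (which would be loaded from a deep-learning framework), average pooling instead of the full convolutional stack, and a nearest-centroid rule instead of the paper's neural-network classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

def cnn_features(image, weights):
    """Stand-in for a frozen pre-trained CNN feature extractor:
    average-pool a 64x64 image to an 8x8 map, then apply a fixed
    linear layer with a ReLU nonlinearity."""
    pooled = image.reshape(8, 8, 8, 8).mean(axis=(1, 3)).ravel()
    return np.maximum(weights @ pooled, 0.0)

# Fixed weights standing in for pre-trained filters (never updated)
W = rng.normal(size=(32, 64))

def fit_centroids(images, labels):
    """Nearest-centroid classifier over the extracted features,
    standing in for the study's neural-network classifier."""
    feats = np.array([cnn_features(im, W) for im in images])
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(image, centroids):
    f = cnn_features(image, W)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))
```

The design point the paper makes is that the extractor stays fixed and only the downstream classifier is trained, which needs far less labeled data than training a CNN end to end.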
21
Park C, Took CC, Seong JK. Machine learning in biomedical engineering. Biomed Eng Lett 2018; 8:1-3. [PMID: 30603186 PMCID: PMC6208556 DOI: 10.1007/s13534-018-0058-3] [Citation(s) in RCA: 56] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2018] [Revised: 01/22/2018] [Accepted: 01/22/2018] [Indexed: 10/18/2022] Open
Affiliation(s)
- Cheolsoo Park
- Department of Computer Engineering, Kwangwoon University, Nowon-gu, Seoul, Korea
- Clive Cheong Took
- Department of Computer Science, University of Surrey, Guildford, GU2 7XH UK
- Joon-Kyung Seong
- School of Biomedical Engineering, Korea University, 145, Anam-ro, Anam-dong 5-ga, Seongbuk-gu, Seoul, 02841 Korea