1. George AA, Tan JL, Kovoor JG, Lee A, Stretton B, Gupta AK, Bacchi S, George B, Singh R. Artificial intelligence in capsule endoscopy: development status and future expectations. Mini-invasive Surg 2024. [DOI: 10.20517/2574-1225.2023.102]
Abstract
In this review, we aim to illustrate the state-of-the-art artificial intelligence (AI) applications in the field of capsule endoscopy. AI has made significant strides in gastrointestinal imaging, particularly in capsule endoscopy, a non-invasive procedure for capturing gastrointestinal tract images. However, manual analysis of capsule endoscopy videos is labour-intensive and error-prone, prompting the development of automated computational algorithms and AI models. While currently serving as a supplementary observer, AI has the capacity to evolve into an autonomous, integrated reading system, potentially reducing capsule reading time substantially while surpassing human accuracy. We searched the Embase, PubMed, MEDLINE, and Cochrane databases from inception to 06 Jul 2023 for studies investigating the use of AI for capsule endoscopy and screened retrieved records for eligibility. Quantitative and qualitative data were extracted and synthesised to identify current themes. The search retrieved 824 articles, from which 291 duplicates and 31 abstracts were removed. After a double-screening process and full-text review, 106 publications were included in the review. Themes pertaining to AI for capsule endoscopy included active gastrointestinal bleeding, erosions and ulcers, vascular lesions and angiodysplasias, polyps and tumours, inflammatory bowel disease, coeliac disease, hookworms, bowel preparation assessment, and multiple lesion detection. This review provides current insights into the impact of AI on capsule endoscopy as of 2023. AI holds the potential for faster, more precise readings and the prospect of autonomous image analysis. However, careful consideration of diagnostic requirements and potential challenges is crucial. The untapped potential within vision transformer technology hints at further evolution and even greater patient benefit.
2. Chu Y, Huang F, Gao M, Zou DW, Zhong J, Wu W, Wang Q, Shen XN, Gong TT, Li YY, Wang LF. Convolutional neural network-based segmentation network applied to image recognition of angiodysplasias lesion under capsule endoscopy. World J Gastroenterol 2023; 29:879-889. [PMID: 36816625] [PMCID: PMC9932427] [DOI: 10.3748/wjg.v29.i5.879]
Abstract
BACKGROUND Small intestinal vascular malformations (angiodysplasias) are common causes of small intestinal bleeding. While capsule endoscopy has become the primary diagnostic method for angiodysplasia, manual reading of the entire gastrointestinal tract is time-consuming and requires a heavy workload, which affects the accuracy of diagnosis.
AIM To evaluate whether artificial intelligence can assist the diagnosis and increase the detection rate of angiodysplasias in the small intestine, achieve automatic disease detection, and shorten the capsule endoscopy (CE) reading time.
METHODS A convolutional neural network semantic segmentation model with a feature fusion method was proposed; it automatically recognizes the category of vascular dysplasia under CE and draws the lesion contour, improving the efficiency and accuracy of identifying small intestinal vascular malformation lesions. ResNet-50 was used as the backbone network to design the fusion mechanism, fuse the shallow and deep features, and classify the images at the pixel level, thereby achieving segmentation and recognition of vascular dysplasia. Training and test sets were constructed, and the model was compared with PSPNet, DeepLabv3+, and UperNet.
RESULTS On the test set constructed in the study, the model achieved satisfactory results: pixel accuracy was 99%, mean intersection over union was 0.69, negative predictive value was 98.74%, and positive predictive value was 94.27%. The model has 46.38 M parameters, requires 467.2 G floating-point operations, and takes 0.6 s to segment and recognize one image.
CONCLUSION Constructing a deep learning-based segmentation network to segment and recognize angiodysplasia lesions is an effective and feasible method for diagnosing angiodysplasia.
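The abstract describes the architecture only at a high level. A minimal PyTorch sketch of the stated idea, a ResNet-50 backbone whose shallow and deep feature maps are fused and then classified pixel by pixel, is given below; the class count, fusion width, and bilinear upsampling are illustrative assumptions rather than the authors' published configuration.

```python
# Sketch of a ResNet-50-backbone segmentation net that fuses a shallow feature
# map with a deep one and classifies every pixel (assumed 2 classes:
# background vs. angiodysplasia). Not the authors' published model.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class FusionSegNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = resnet50(weights=None)  # load pretrained weights in practice
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.layer1 = backbone.layer1      # shallow features: 256 ch, 1/4 size
        self.layer2 = backbone.layer2
        self.layer3 = backbone.layer3
        self.layer4 = backbone.layer4      # deep features: 2048 ch, 1/32 size
        self.reduce_deep = nn.Conv2d(2048, 256, kernel_size=1)
        self.classifier = nn.Conv2d(256 + 256, num_classes, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        x = self.stem(x)
        shallow = self.layer1(x)
        deep = self.layer4(self.layer3(self.layer2(shallow)))
        deep = F.interpolate(self.reduce_deep(deep), size=shallow.shape[-2:],
                             mode="bilinear", align_corners=False)
        fused = torch.cat([shallow, deep], dim=1)        # shallow/deep fusion
        logits = self.classifier(fused)                  # pixel-level classification
        return F.interpolate(logits, size=(h, w),
                             mode="bilinear", align_corners=False)

# Example: per-pixel class map for one 3x320x320 capsule frame
model = FusionSegNet()
pred = model(torch.randn(1, 3, 320, 320)).argmax(dim=1)  # shape (1, 320, 320)
```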
Affiliations
- Ye Chu, Duo-Wu Zou, Jie Zhong, Wei Wu, Qi Wang, Xiao-Nan Shen, Ting-Ting Gong, Li-Fu Wang: Department of Gastroenterology, Shanghai Jiao Tong University School of Medicine, Ruijin Hospital, Shanghai 200025, China
- Fang Huang, Min Gao, Yuan-Yi Li: Technology Platform Department, Jinshan Science & Technology (Group) Co., Ltd., Chongqing 401120, China
4. Artificial Intelligence in Gastroenterology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-58080-3_163-2]
5. Alam MJ, Rashid RB, Fattah SA, Saquib M. RAt-CapsNet: A Deep Learning Network Utilizing Attention and Regional Information for Abnormality Detection in Wireless Capsule Endoscopy. IEEE J Transl Eng Health Med 2022; 10:3300108. [PMID: 36032311] [PMCID: PMC9401095] [DOI: 10.1109/jtehm.2022.3198819]
Affiliations
- Md. Jahin Alam, Rifat Bin Rashid, Shaikh Anowarul Fattah: Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh
- Mohammad Saquib: Department of Electrical Engineering, The University of Texas at Dallas, Richardson, TX, USA
6. Strümke I, Hicks SA, Thambawita V, Jha D, Parasa S, Riegler MA, Halvorsen P. Artificial Intelligence in Gastroenterology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_163]
7. Amiri Z, Hassanpour H, Beghdadi A. A Computer-Aided Method for Digestive System Abnormality Detection in WCE Images. J Healthc Eng 2021; 2021:7863113. [PMID: 34707798] [PMCID: PMC8545542] [DOI: 10.1155/2021/7863113]
Abstract
Wireless capsule endoscopy (WCE) is a powerful tool for the diagnosis of gastrointestinal diseases. Its output is a video of roughly eight hours containing about 8000 frames, and reviewing all of these frames is a difficult task for a physician. In this paper, a new abnormality detection system for WCE images is proposed. The proposed system has four main steps: (1) preprocessing, (2) region of interest (ROI) extraction, (3) feature extraction, and (4) classification. In ROI extraction, distinct areas are first highlighted and non-distinct areas are faded using a joint normal distribution; distinct areas are then extracted as an ROI segment by thresholding. The main idea is to extract abnormal areas in each frame, so the approach can be used to extract various lesions in WCE images. In the feature extraction step, three different types of features (color, texture, and shape) are employed. Finally, the features are classified using a support vector machine. The proposed system was tested on the Kvasir-Capsule dataset and can detect multiple lesions from WCE frames with high accuracy.
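As a rough illustration of this pipeline shape (ROI extraction, then color/texture/shape descriptors, then SVM classification), the sketch below uses a simple statistical distinctness rule and coarse descriptors; the percentile threshold and the descriptors are stand-ins, not the paper's joint-normal-distribution formulation.

```python
# Sketch of the four-step pipeline: crude ROI extraction, simple color/texture/
# shape descriptors from the ROI, and an SVM classifier. The 95th-percentile
# "distinctness" rule and the descriptors are illustrative stand-ins.
import numpy as np
from sklearn.svm import SVC

def roi_mask(img: np.ndarray) -> np.ndarray:
    """Mark pixels that deviate strongly from the frame's mean color."""
    flat = img.reshape(-1, 3).astype(np.float64)
    mean, std = flat.mean(axis=0), flat.std(axis=0) + 1e-6
    distinctness = np.abs((flat - mean) / std).sum(axis=1)
    return (distinctness > np.percentile(distinctness, 95)).reshape(img.shape[:2])

def frame_features(img: np.ndarray) -> np.ndarray:
    mask = roi_mask(img)
    roi = img[mask] if mask.any() else img.reshape(-1, 3)
    color = roi.mean(axis=0)                 # color descriptor (mean RGB of ROI)
    texture = [roi.std(axis=0).mean()]       # coarse texture proxy
    shape = [mask.mean()]                    # shape proxy: ROI area fraction
    return np.concatenate([color, texture, shape])

# frames: list of HxWx3 uint8 arrays; labels: 0 = normal, 1 = abnormal
def train_classifier(frames, labels):
    X = np.stack([frame_features(f) for f in frames])
    return SVC(kernel="rbf", C=1.0).fit(X, labels)
```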
Affiliations
- Zahra Amiri, Hamid Hassanpour: Image Processing and Data Mining Lab, Shahrood University of Technology, Shahrood, Iran
- Azeddine Beghdadi: Department of Computer Science and Engineering, University Sorbonne Paris Nord, Villetaneuse, France
8. Jain S, Seal A, Ojha A, Yazidi A, Bures J, Tacheci I, Krejcar O. A deep CNN model for anomaly detection and localization in wireless capsule endoscopy images. Comput Biol Med 2021; 137:104789. [PMID: 34455302] [DOI: 10.1016/j.compbiomed.2021.104789]
Abstract
Wireless capsule endoscopy (WCE) is one of the most efficient methods for the examination of gastrointestinal tracts. Computer-aided intelligent diagnostic tools alleviate the challenges faced during manual inspection of long WCE videos. Several approaches have been proposed in the literature for the automatic detection and localization of anomalies in WCE images. Some focus on specific anomalies such as bleeding, polyps, or lesions; relatively few generic methods have been proposed to detect these common anomalies simultaneously. In this paper, a deep convolutional neural network (CNN) based model, 'WCENet', is proposed for anomaly detection and localization in WCE images. The model works in two phases. In the first phase, a simple and efficient attention-based CNN classifies an image into one of four categories: polyp, vascular, inflammatory, or normal. If the image is classified into one of the abnormal categories, it is processed in the second phase for anomaly localization. A fusion of Grad-CAM++ and a custom SegNet is used for anomalous region segmentation in the abnormal image. The WCENet classifier attains an accuracy of 98% and an area under the receiver operating characteristic curve of 99%. The WCENet segmentation model obtains a frequency-weighted intersection over union of 81% and an average Dice score of 56% on the KID dataset. WCENet outperforms nine different state-of-the-art conventional machine learning and deep learning models on the KID dataset. The proposed model demonstrates potential for clinical applications.
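A compact sketch of the two-phase idea follows: a CNN classifies the frame, and only frames predicted abnormal receive a coarse localization map. Plain Grad-CAM on a ResNet-18 stands in here for the paper's attention CNN and its Grad-CAM++/SegNet fusion, and the class list mirrors the four categories named in the abstract.

```python
# Two-phase sketch: classify the frame; if it is abnormal, produce a Grad-CAM
# heatmap for coarse localization. A ResNet-18 and plain Grad-CAM stand in for
# the paper's attention CNN and Grad-CAM++/SegNet fusion.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

CLASSES = ["normal", "polyp", "vascular", "inflammatory"]
model = resnet18(weights=None, num_classes=len(CLASSES)).eval()  # train first in practice

def classify_and_localize(frame: torch.Tensor):
    """frame: (1, 3, H, W). Returns (predicted class name, heatmap or None)."""
    feats = {}
    def hook(_module, _inputs, output):
        feats["act"] = output
        output.retain_grad()                 # keep gradients of the feature map
    handle = model.layer4.register_forward_hook(hook)

    logits = model(frame)
    pred = int(logits.argmax(dim=1))
    heatmap = None
    if CLASSES[pred] != "normal":            # phase 2 only for abnormal frames
        logits[0, pred].backward()
        act, grad = feats["act"], feats["act"].grad
        weights = grad.mean(dim=(2, 3), keepdim=True)   # Grad-CAM channel weights
        cam = F.relu((weights * act).sum(dim=1, keepdim=True))
        heatmap = F.interpolate(cam, size=frame.shape[-2:], mode="bilinear",
                                align_corners=False)[0, 0]
    handle.remove()
    return CLASSES[pred], heatmap

label, cam = classify_and_localize(torch.randn(1, 3, 224, 224))
```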
Affiliations
- Samir Jain, Ayan Seal, Aparajita Ojha: PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur, 482005, India
- Anis Yazidi: Department of Computer Science, OsloMet - Oslo Metropolitan University, Oslo, Norway; Department of Plastic and Reconstructive Surgery, Oslo University Hospital, Oslo, Norway; Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
- Jan Bures, Ilja Tacheci: Second Department of Internal Medicine-Gastroenterology, Charles University, Faculty of Medicine in Hradec Kralove and University Hospital Hradec Kralove, Sokolska 581, Hradec Kralove, 50005, Czech Republic
- Ondrej Krejcar: Center for Basic and Applied Research, Faculty of Informatics and Management, University of Hradec Kralove, Hradecka 1249, Hradec Kralove, 50003, Czech Republic; Malaysia Japan International Institute of Technology, Universiti Teknologi Malaysia, Jalan Sultan Yahya Petra, 54100, Kuala Lumpur, Malaysia
9. Rathnamala S, Jenicka S. Automated bleeding detection in wireless capsule endoscopy images based on color feature extraction from Gaussian mixture model superpixels. Med Biol Eng Comput 2021; 59:969-987. [PMID: 33837919] [DOI: 10.1007/s11517-021-02352-8]
Abstract
Wireless capsule endoscopy is a commonly employed modality in the investigation of gastrointestinal tract pathologies. However, the time taken to interpret these images is very high due to the large volume of images generated, and automated detection of disorders in these images can facilitate faster clinical interventions. In this paper, we propose an automated system based on Gaussian mixture model superpixels for bleeding detection and segmentation of candidate regions. The proposed system uses a classic binary support vector machine classifier trained with seven features, including color and texture attributes, extracted from the Gaussian mixture model superpixels of the WCE images. Once a bleeding image is detected, bleeding regions are segmented from it by incrementally grouping the superpixels based on deltaE color differences. Tested on standard datasets, the system exhibits the best performance compared with state-of-the-art approaches with respect to classification accuracy, feature selection, computational time, and segmentation accuracy. The proposed system achieves 99.88% accuracy, 99.83% sensitivity, and 100% specificity, signifying its effectiveness in bleeding detection with very few classification errors.
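The sketch below illustrates the general idea behind GMM "superpixels" as described: cluster the pixels of a frame with a Gaussian mixture over color and position, summarize each cluster with color statistics, and feed the per-frame vector to a binary SVM. The cluster count and the statistics are illustrative; the paper's seven-feature set and deltaE region growing are not reproduced.

```python
# Sketch of GMM "superpixels": cluster pixels over color + position with a
# Gaussian mixture, summarize each cluster by its mean color, and classify the
# per-frame feature vector with a binary SVM. Cluster count is an assumption.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def gmm_superpixel_features(img: np.ndarray, n_segments: int = 8) -> np.ndarray:
    h, w, _ = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    pix = np.column_stack([img.reshape(-1, 3).astype(np.float64),
                           yy.ravel() / h, xx.ravel() / w])
    labels = GaussianMixture(n_components=n_segments, covariance_type="diag",
                             random_state=0).fit_predict(pix)
    feats = []
    for k in range(n_segments):              # mean color of every GMM superpixel
        seg = img.reshape(-1, 3)[labels == k]
        feats.extend(seg.mean(axis=0) if len(seg) else np.zeros(3))
    return np.asarray(feats)

# frames: list of HxWx3 uint8 arrays; labels: 1 = bleeding, 0 = non-bleeding
def train_bleeding_detector(frames, labels):
    X = np.stack([gmm_superpixel_features(f) for f in frames])
    return SVC(kernel="rbf").fit(X, labels)
```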
Affiliations
- S Rathnamala: Department of Information Technology, Sethu Institute of Technology, Virudhunagar District, Kariapatti, Tamil Nadu, 626115, India
- S Jenicka: Department of CSE, Sethu Institute of Technology, Virudhunagar District, Kariapatti, Tamil Nadu, 626115, India
10. Ghosh T, Chakareski J. Deep Transfer Learning for Automated Intestinal Bleeding Detection in Capsule Endoscopy Imaging. J Digit Imaging 2021; 34:404-417. [PMID: 33728563] [PMCID: PMC8290011] [DOI: 10.1007/s10278-021-00428-3]
Abstract
PURPOSE The objective of this paper was to develop a computer-aided diagnostic (CAD) tool for automated analysis of capsule endoscopy (CE) images, more precisely, to detect small intestinal abnormalities such as bleeding. METHODS In particular, we explore a convolutional neural network (CNN)-based deep learning framework to identify bleeding and non-bleeding CE images, in which a pre-trained AlexNet is used to train a transfer learning CNN that carries out the identification. Moreover, bleeding zones in a bleeding-identified image are also delineated using deep learning-based semantic segmentation that leverages a SegNet deep neural network. RESULTS To evaluate the performance of the proposed framework, we carry out experiments on two publicly available clinical datasets and achieve F1 scores of 98.49% and 88.39% on the capsuleendoscopy.org and KID datasets, respectively. For bleeding zone identification, 94.42% global accuracy and 90.69% weighted intersection over union (IoU) are achieved. CONCLUSION Finally, our performance results are compared with other recently developed state-of-the-art methods, and consistent advances are demonstrated in the performance measures for bleeding image and bleeding zone detection. Relative to the established practice of manual inspection and annotation of CE images by a physician, our framework enables considerable savings in annotation time and human labor for bleeding detection in CE images, while providing the additional benefits of bleeding zone delineation and increased detection accuracy. Moreover, the overall cost of CE enabled by our framework will also be much lower due to the reduction of manual labor, which can make CE affordable for a larger population.
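Transfer learning from a pre-trained AlexNet, as described, can be sketched in a few lines of PyTorch: replace the final layer with a two-class head and fine-tune. The freezing policy and optimizer settings are illustrative choices; the SegNet-based zone segmentation stage is omitted.

```python
# Transfer-learning sketch: ImageNet-pretrained AlexNet with its last layer
# replaced by a bleeding / non-bleeding head. Freezing the convolutional stack
# and the optimizer settings are illustrative choices.
import torch
import torch.nn as nn
from torchvision.models import alexnet, AlexNet_Weights

model = alexnet(weights=AlexNet_Weights.IMAGENET1K_V1)
for p in model.features.parameters():        # keep pretrained conv features fixed
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, 2)     # new 2-class head

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """images: (B, 3, 224, 224) normalized frames; labels: (B,) in {0, 1}."""
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```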
Affiliations
- Tonmoy Ghosh: Department of Electrical and Computer Engineering, University of Alabama, Tuscaloosa, Alabama 35401, USA
- Jacob Chakareski: Department of Informatics, College of Computing, New Jersey Institute of Technology, Newark, New Jersey 07103, USA
11. Caroppo A, Leone A, Siciliano P. Deep transfer learning approaches for bleeding detection in endoscopy images. Comput Med Imaging Graph 2021; 88:101852. [PMID: 33493998] [DOI: 10.1016/j.compmedimag.2020.101852]
Abstract
Wireless capsule endoscopy is a non-invasive, wireless imaging tool that has developed rapidly over the last several years. One of its main limiting factors is that it produces a huge number of images whose analysis by a doctor is extremely time-consuming. This problem has been addressed with computer-aided diagnosis systems, which have clearly improved the automatic inspection and analysis of images acquired by the capsule. Recently, a big advance in the classification of endoscopic images has been achieved with the emergence of deep learning methods. The proposed expert system employs three pre-trained deep convolutional neural networks for feature extraction. To construct efficient feature sets, the features from the VGG19, InceptionV3, and ResNet50 models are selected and fused using the minimum Redundancy Maximum Relevance (mRMR) method and different fusion rules. Finally, supervised machine learning algorithms are employed to classify the images, using the extracted features, into two categories: bleeding and non-bleeding images. For performance evaluation, a series of experiments was performed on two standard benchmark datasets. The proposed architecture outclasses the single deep learning architectures, with average accuracies in detecting bleeding regions of 97.65% and 95.70% on the two datasets across three different fusion rules; the best combination in terms of accuracy and training time was obtained using mean value pooling as the fusion rule and a support vector machine as the classifier.
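A minimal sketch of the fusion idea is shown below: obtain descriptor vectors from three pretrained CNNs, fuse them with mean-value pooling, and classify with an SVM. For brevity the 1000-dimensional output vectors are used as descriptors and the mRMR feature selection step is omitted, so this is a simplification of the paper's pipeline.

```python
# Fusion sketch: descriptor vectors from VGG19, InceptionV3 and ResNet50 are
# fused by mean-value pooling and classified with an SVM. The 1000-D outputs
# serve as descriptors here; mRMR selection and other fusion rules are omitted.
import numpy as np
import torch
from torchvision.models import vgg19, inception_v3, resnet50
from sklearn.svm import SVC

def build_extractors():
    # weights=None for brevity; load ImageNet weights in practice
    return (vgg19(weights=None).eval(),
            inception_v3(weights=None).eval(),
            resnet50(weights=None).eval())

@torch.no_grad()
def fused_features(img299: torch.Tensor, extractors) -> np.ndarray:
    """img299: (1, 3, 299, 299) normalized frame. Returns a fused 1000-D vector."""
    outs = [m(img299) for m in extractors]            # three (1, 1000) vectors
    fused = torch.stack(outs, dim=0).mean(dim=0)      # mean-value pooling fusion
    return fused.squeeze(0).numpy()

# frames: (N, 3, 299, 299) tensor; labels: array of {0, 1} (bleeding / non-bleeding)
def train(frames: torch.Tensor, labels: np.ndarray):
    extractors = build_extractors()
    X = np.stack([fused_features(f.unsqueeze(0), extractors) for f in frames])
    return SVC(kernel="linear").fit(X, labels)
```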
Affiliations
- Andrea Caroppo, Alessandro Leone, Pietro Siciliano: Institute for Microelectronics and Microsystems, National Research Council of Italy, Lecce 73100, Italy
12. Artificial Intelligence in Medicine. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_163-1]
13. Jain S, Seal A, Ojha A, Krejcar O, Bureš J, Tachecí I, Yazidi A. Detection of abnormality in wireless capsule endoscopy images using fractal features. Comput Biol Med 2020; 127:104094. [PMID: 33152668] [DOI: 10.1016/j.compbiomed.2020.104094]
Abstract
One of the most recent non-invasive technologies to examine the gastrointestinal tract is wireless capsule endoscopy (WCE). As there are thousands of endoscopic images in an 8-15 h long video, an evaluator has to pay constant attention for a relatively long time (60-120 min). The possibility that pathological findings appear in only a few images (each displayed for evaluation for a few seconds) therefore brings a significant risk of missing the pathology, with all the negative consequences for the patient. Hence, manually reviewing a video to identify abnormal images is not only a tedious and time-consuming task that overwhelms human attention but is also error-prone. In this paper, a method is proposed for the automatic detection of abnormal WCE images. The differential box counting method is used to extract the fractal dimension (FD) of WCE images, and a random forest-based ensemble classifier is used to identify abnormal frames. The FD is a well-known feature describing texture, smoothness, and roughness. In this paper, FDs are extracted from pixel blocks of WCE images and are fed to the classifier to identify images with abnormalities. To determine a suitable pixel-block size for FD feature extraction, various block sizes are considered and fed into six frequently used classifiers separately, and a block size of 7×7 is empirically found to give the best performance. The selection of the random forest ensemble classifier is made using the same empirical study. Performance of the proposed method is evaluated on two datasets containing WCE frames. Results demonstrate that the proposed method outperforms some of the state-of-the-art methods, with AUCs of 85% and 99% on Dataset-I and Dataset-II, respectively.
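The differential box counting computation itself is straightforward to sketch; the version below estimates an FD per 7×7 block and feeds the per-image FD vector to a random forest. The scale set and the classifier settings are illustrative assumptions.

```python
# Differential box counting (DBC) fractal dimension per 7x7 block, with the
# per-image FD vector fed to a random forest. Scale set and forest size are
# illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dbc_fractal_dimension(block: np.ndarray, scales=(2, 3, 4)) -> float:
    """DBC fractal dimension of a square grayscale block (values 0-255)."""
    m, G = block.shape[0], 256.0
    log_nr, log_inv_r = [], []
    for s in scales:
        h = max(s * G / m, 1.0)                    # box height at this scale
        n_r = 0
        for i in range(0, m - m % s, s):
            for j in range(0, m - m % s, s):
                cell = block[i:i + s, j:j + s]
                n_r += int(np.ceil(cell.max() / h) - np.ceil(cell.min() / h)) + 1
        log_nr.append(np.log(n_r))
        log_inv_r.append(np.log(m / s))
    slope, _ = np.polyfit(log_inv_r, log_nr, 1)    # FD = slope of the log-log fit
    return slope

def image_features(gray: np.ndarray, block: int = 7) -> np.ndarray:
    h, w = gray.shape
    return np.asarray([dbc_fractal_dimension(gray[i:i + block, j:j + block])
                       for i in range(0, h - block + 1, block)
                       for j in range(0, w - block + 1, block)])

# images: equally sized grayscale frames; labels: 0 = normal, 1 = abnormal
def train(images, labels):
    X = np.stack([image_features(g) for g in images])
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
```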
Affiliations
- Samir Jain, Aparajita Ojha: PDPM Indian Institute of Information Technology Design and Manufacturing, Jabalpur, 482005, India
- Ayan Seal: PDPM Indian Institute of Information Technology Design and Manufacturing, Jabalpur, 482005, India; Center for Basic and Applied Research, Faculty of Informatics and Management, University of Hradec Kralove, Hradecka 1249, Hradec Kralove, 50003, Czech Republic
- Ondrej Krejcar: Center for Basic and Applied Research, Faculty of Informatics and Management, University of Hradec Kralove, Hradecka 1249, Hradec Kralove, 50003, Czech Republic; Malaysia Japan International Institute of Technology, Universiti Teknologi Malaysia, Jalan Sultan Yahya Petra, 54100, Kuala Lumpur, Malaysia
- Jan Bureš, Ilja Tachecí: Second Department of Internal Medicine-Gastroenterology, Charles University, Faculty of Medicine in Hradec Kralove, University Hospital Hradec Kralove, Sokolska 581, Hradec Kralove, 50005, Czech Republic
- Anis Yazidi: Artificial Intelligence Lab, Oslo Metropolitan University, Norway
14. Rahim T, Usman MA, Shin SY. A survey on contemporary computer-aided tumor, polyp, and ulcer detection methods in wireless capsule endoscopy imaging. Comput Med Imaging Graph 2020; 85:101767. [DOI: 10.1016/j.compmedimag.2020.101767]
15. Kundu AK, Fattah SA, Wahid KA. Multiple Linear Discriminant Models for Extracting Salient Characteristic Patterns in Capsule Endoscopy Images for Multi-Disease Detection. IEEE J Transl Eng Health Med 2020; 8:3300111. [PMID: 32190429] [PMCID: PMC7062148] [DOI: 10.1109/jtehm.2020.2964666]
Abstract
Background: Computer-aided disease detection schemes from wireless capsule endoscopy (WCE) videos have received great attention from researchers as a way of reducing physicians' burden from the time-consuming and risky manual review process. While single-disease classification schemes have been studied extensively in the past, developing a unified scheme capable of detecting multiple gastrointestinal (GI) diseases is very challenging due to the highly irregular color patterns of diseased images. Method: In this paper, a computer-aided method is developed to detect multiple GI diseases from WCE videos, using a linear discriminant analysis (LDA)-based region of interest (ROI) separation scheme followed by a probabilistic model fitting approach. Commonly, because pixel-labeled images are available only in small numbers, only image-level annotations are used in training for detecting diseases in WCE images, whereas pixel-level knowledge, although a major source for learning disease characteristics, is left unused. To learn characteristic disease patterns from pixel-labeled images, a set of LDA models is trained and later used to extract the salient ROI from WCE images in both the training and testing stages. The intensity patterns of the ROI are then modeled by a suitable probability distribution, and the fitted parameters of the distribution are used as features in a supervised cascaded classification scheme. Results: To validate the proposed multi-disease detection scheme, a set of pixel-labeled images of bleeding, ulcer, and tumor is used to build the LDA models, and a large WCE dataset is then used for training and testing. A high level of accuracy is achieved even with a small number of pixel-labeled images. Conclusion: The proposed scheme is therefore expected to help physicians in reviewing large numbers of WCE images to diagnose different GI diseases.
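A minimal sketch of the described flow is given below: an LDA model trained on pixel-labeled examples separates salient ROI pixels, a probability distribution is fitted to the ROI intensities, and the fitted parameters become the frame's feature vector for a supervised classifier. The normal distribution and the single SVM used here are simplifications of the paper's fitted PDF and cascaded multi-disease classifier.

```python
# Sketch: LDA trained on pixel-labeled data separates ROI pixels, a PDF is
# fitted to the ROI intensities, and the fitted parameters are the frame's
# features for an SVM. The normal PDF and single SVM are simplifications.
import numpy as np
from scipy import stats
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def fit_pixel_lda(pixels: np.ndarray, pixel_labels: np.ndarray):
    """pixels: (N, 3) RGB values; pixel_labels: 1 = lesion, 0 = background."""
    return LinearDiscriminantAnalysis().fit(pixels, pixel_labels)

def frame_features(img: np.ndarray, lda) -> np.ndarray:
    flat = img.reshape(-1, 3).astype(np.float64)
    is_roi = lda.predict(flat).astype(bool)        # LDA-based ROI separation
    roi = flat[is_roi] if is_roi.any() else flat
    loc, scale = stats.norm.fit(roi.mean(axis=1))  # fitted PDF parameters
    return np.array([loc, scale])

# frames: HxWx3 arrays; frame_labels: disease class per frame;
# pixels / pixel_labels: the small pixel-annotated training set
def train(frames, frame_labels, pixels, pixel_labels):
    lda = fit_pixel_lda(pixels, pixel_labels)
    X = np.stack([frame_features(f, lda) for f in frames])
    return lda, SVC(kernel="rbf").fit(X, frame_labels)
```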
Affiliations
- Amit Kumar Kundu, Shaikh Anowarul Fattah: Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1205, Bangladesh
- Khan A Wahid: Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
16. Quantitative Analysis of Melanosis Coli Colonic Mucosa Using Textural Patterns. Appl Sci (Basel) 2020. [DOI: 10.3390/app10010404]
Abstract
Melanosis coli (MC) is a disease related to long-term use of anthranoid laxative agents. Patients with clinical constipation or obesity are more likely to use these drugs for long periods. Moreover, patients with MC are more likely to develop polyps, particularly adenomatous polyps, which can transform into colorectal cancer. Recognizing multiple polyps in MC is challenging due to their heterogeneity. Therefore, this study proposed a quantitative assessment of MC colonic mucosa using texture patterns. In total, MC colonoscopy images from 1092 person-time examinations were included in this study. First, the correlations among carcinoembryonic antigen, polyp texture, and pathology were analyzed. Then, 181 patients with MC were selected for further analysis, while patients with unclear images were excluded. Texture patterns in the colorectal images were extracted using the gray-level co-occurrence matrix (GLCM). Pearson correlation analysis indicated that five texture features were significantly correlated with pathological results (p < 0.001). These results could be used in the future to design decision-support software to assist physicians; the combination of colonoscopy findings and image-analysis data can provide clinicians with suggestions for assessing patients with MC.
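For readers unfamiliar with GLCM texture quantification, the sketch below computes a handful of GLCM descriptors for a mucosa crop and correlates one of them with a numeric pathology label, mirroring the analysis described; the distances, angles, and pathology encoding are illustrative choices.

```python
# GLCM texture descriptors for a mucosa crop, plus a Pearson correlation of one
# descriptor against a numeric pathology label. Distances, angles and the
# pathology encoding are illustrative.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from scipy.stats import pearsonr

def glcm_features(gray: np.ndarray) -> dict:
    """gray: 2-D uint8 image. Returns a few GLCM texture descriptors."""
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# images: list of 2-D uint8 mucosa crops; pathology: numeric label per image
def correlate_feature(images, pathology, prop="contrast"):
    values = [glcm_features(g)[prop] for g in images]
    r, p = pearsonr(values, pathology)
    return r, p          # correlation coefficient and its p-value
```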
17. Kundu AK, Fattah SA. Probability density function based modeling of spatial feature variation in capsule endoscopy data for automatic bleeding detection. Comput Biol Med 2019; 115:103478. [PMID: 31698239] [DOI: 10.1016/j.compbiomed.2019.103478]
Abstract
Wireless capsule endoscopy (WCE) is a video technology for inspecting abnormalities, such as bleeding, in the gastrointestinal tract. To avoid a complex and long manual review process, automatic bleeding detection schemes have been developed that mainly utilize features extracted from WCE images. In feature-based bleeding detection schemes, either global features are used, which produce averaged characteristics and ignore the effect of smaller bleeding regions, or local features are used, which lead to a large feature dimension. In this paper, pixels of interest (POI) in a given WCE image are determined using a linear separation scheme, local spatial features are then extracted from the POI, and finally a suitable characteristic probability density function (PDF) is fitted over the resulting feature space. The proposed PDF model fitting approach not only reduces the computational complexity but also offers a more consistent representation of a class. A detailed analysis is carried out to find the most suitable PDF, and fitting a Rayleigh PDF model to the local spatial features is found to be best suited for bleeding detection. For classification, the fitted PDF parameters are used as features in a supervised support vector machine classifier. Pixels residing in the close vicinity of the POI are further classified with the help of an unsupervised clustering-based scheme to extract more precise bleeding regions. A large number of WCE images obtained from 30 publicly available WCE videos are used for performance evaluation, and the effects of changes in PDF models, block statistics, color spaces, and classifiers on classification performance are experimentally analyzed. The proposed scheme shows satisfactory performance in terms of sensitivity (97.55%), specificity (96.59%), and accuracy (96.77%), and its results outperform those reported for some state-of-the-art methods.
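The core of the scheme, fitting a Rayleigh PDF to local features at pixels of interest and classifying the fitted parameters with an SVM, can be sketched as follows. The linear POI rule and the local red-excess feature are hypothetical stand-ins for the paper's exact definitions.

```python
# Sketch: a linear rule picks pixels of interest (POI), a Rayleigh PDF is
# fitted to a local red-excess feature at the POI, and the fitted parameters
# are the frame's features for an SVM. The POI rule and feature are hypothetical.
import numpy as np
from scipy import stats
from sklearn.svm import SVC

def poi_mask(img: np.ndarray) -> np.ndarray:
    """Linear separation in color space: red strongly dominating green."""
    r, g = img[..., 0].astype(np.float64), img[..., 1].astype(np.float64)
    return (r - 1.5 * g) > 20                      # hypothetical linear threshold

def rayleigh_features(img: np.ndarray) -> np.ndarray:
    mask = poi_mask(img)
    feat = (img[..., 0].astype(np.float64) - img[..., 1])[mask]  # red excess at POI
    if feat.size < 10:                             # too few POI: no evidence of bleeding
        return np.zeros(2)
    loc, scale = stats.rayleigh.fit(feat, floc=0)  # fitted Rayleigh parameters
    return np.array([loc, scale])

# frames: HxWx3 uint8 arrays; labels: 1 = bleeding, 0 = non-bleeding
def train(frames, labels):
    X = np.stack([rayleigh_features(f) for f in frames])
    return SVC(kernel="rbf").fit(X, labels)
```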
Affiliations
- Amit Kumar Kundu, Shaikh Anowarul Fattah: Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Bangladesh