Minireviews Open Access
Copyright ©The Author(s) 2022. Published by Baishideng Publishing Group Inc. All rights reserved.
World J Gastrointest Oncol. May 15, 2022; 14(5): 989-1001
Published online May 15, 2022. doi: 10.4251/wjgo.v14.i5.989
Scoping out the future: The application of artificial intelligence to gastrointestinal endoscopy
Scott B Minchenberg, Trent Walradt, Department of Internal Medicine, Beth Israel Deaconess Medical Center, Boston, MA 02130, United States
Jeremy R Glissen Brown, Division of Gastroenterology, Beth Israel Deaconess Medical Center, Boston, MA 02130, United States
ORCID number: Scott B Minchenberg (0000-0002-0445-5956); Trent Walradt (0000-0003-1081-0923); Jeremy R Glissen Brown (0000-0002-7204-7241).
Author contributions: Minchenberg SB and Walradt T contributed equally to the work and should be listed as co-first authors; Minchenberg SB, Walradt T, and Glissen Brown JR contributed to manuscript concept and layout; Minchenberg SB and Walradt T contributed to drafting of the manuscript; All authors read and approved the final manuscript.
Conflict-of-interest statement: All authors disclose no financial relationships relevant to this publication.
Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: https://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Jeremy R Glissen Brown, MD, Academic Fellow, Division of Gastroenterology, Beth Israel Deaconess Medical Center, 330 Brookline Avenue, Boston, MA 02130, United States. jglissen@bidmc.harvard.edu
Received: February 26, 2021
Peer-review started: February 26, 2021
First decision: May 3, 2021
Revised: June 21, 2021
Accepted: April 20, 2022
Article in press: April 20, 2022
Published online: May 15, 2022

Abstract

Artificial intelligence (AI) is a rapidly expanding field in gastrointestinal endoscopy. Although there are a myriad of applications of AI, ranging from identification of bleeding to predicting outcomes in patients with inflammatory bowel disease, a great deal of research has focused on the identification and classification of gastrointestinal malignancies. Several of the initial randomized, prospective trials utilizing AI in clinical medicine have centered on polyp detection during screening colonoscopy. In addition to work focused on colorectal cancer, AI systems have also been applied to gastric, esophageal, pancreatic, and liver cancers. Despite promising results in initial studies, the generalizability of most of these AI systems has not yet been evaluated. In this article, we review recent developments in the field of AI applied to gastrointestinal oncology.

Key Words: Artificial intelligence, Oncology, Gastroenterology, Endoscopy, Machine learning, Computer-assisted decision making, Computer-aided detection, Computer-aided diagnosis

Core Tip: Artificial intelligence (AI) technologies have become a topic of intense investigation in clinical medicine. In gastrointestinal oncology AI has been employed in multiple areas, with notable progress seen in computer-aided detection and computer-aided diagnosis. Most efforts have focused on colorectal cancer, but AI systems have also been developed for malignancies involving the esophagus, stomach, pancreas and liver. Although studies in this field have demonstrated excellent diagnostic characteristics, many have limited external validity. This article will review the current evidence for AI technologies applied to the detection and diagnosis of gastrointestinal malignancies.



INTRODUCTION

The first documented gastrointestinal (GI) endoscopic procedure was performed by Dr. Adolph Kussmaul in the 19th century using a modified Desormeaux device illuminated by a gasoline lamp with reflective mirrors[1]. Since the 1800s, there have been remarkable technological advancements in the field of endoscopy allowing for diagnostic and therapeutic interventions ranging from early detection of cancerous lesions to the treatment of life-threatening gastrointestinal bleeding. Mastering endoscopic techniques takes years of training followed by decades of experience. Even among experts, however, there is still considerable interprovider variability and room for improvement in the detection rate of gastrointestinal malignancies.

Artificial intelligence (AI) represents an attractive solution to these issues. Over the past two decades, numerous systems have been developed for computer-aided detection (CADe) and computer-aided diagnosis (CADx) of gastrointestinal lesions. Furthermore, some of the first prospective, randomized trials applying AI in clinical medicine have evaluated CADe for colorectal polyps[2]. Additional randomized trials are underway evaluating a broad spectrum of AI technologies in GI oncology. As products become commercially available, it will be important for gastroenterologists to familiarize themselves with these technologies and the data supporting them.

DEFINITIONS

AI refers to technology designed to mimic human intelligence. A subset of AI is machine learning, a technique in which computers use data to improve their performance without explicit instruction. The majority of AI systems studied in GI oncology are based on two major approaches: traditional machine learning and deep neural networks.

Traditional machine learning is based on a set of algorithms that require substantial manually defined input in order to make a particular decision. Much of the learning in traditional machine learning is based on pattern recognition relating to features such as color, texture, intensity, and shape. Many studies utilizing traditional machine learning implemented support vector machines (SVM) or a modified form of SVM. The crux of SVM is the identification of hyperplanes that allow for the separation of data points. Initially this method was selected because of its high ratio of accuracy to computational cost, allowing for application in real time. As computational power grew in the 21st century, various groups began exploring the use of deep neural networks, in many cases convolutional neural networks (CNN), for the detection and diagnosis of concerning lesions. Deep neural networks function by extracting features via a series of filters whose outputs are processed by successive layers of the network while preserving spatial and temporal relationships. This allows for dynamic learning while the algorithm extracts clinically relevant features.
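To make this distinction concrete, the sketch below (in Python, using the scikit-learn and PyTorch libraries) contrasts the two approaches. The feature names, dimensions, and random data are illustrative assumptions, not details of any system discussed in this review.

```python
# Illustrative sketch only: contrasts traditional machine learning (SVM on
# handcrafted features) with deep learning (a CNN on raw pixels). Feature
# names and dimensions are hypothetical, not taken from any cited study.
import numpy as np
from sklearn.svm import SVC
import torch
import torch.nn as nn

# --- Traditional machine learning: SVM on handcrafted features ---
# Each image is reduced to engineered descriptors (e.g., mean color,
# texture contrast, shape eccentricity) before classification.
rng = np.random.default_rng(0)
features = rng.random((200, 4))          # 200 images x 4 handcrafted features
labels = rng.integers(0, 2, 200)         # 1 = lesion, 0 = normal mucosa
svm = SVC(kernel="rbf")                  # finds a separating hyperplane in kernel space
svm.fit(features, labels)

# --- Deep learning: a minimal CNN on raw pixels ---
# Convolutional filters extract spatial features directly from the image.
cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),          # two classes: lesion vs normal
)
frame = torch.rand(1, 3, 64, 64)         # one 64x64 RGB endoscopy frame
logits = cnn(frame)                      # class scores learned end to end
```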

Most machine learning models have several settings defined by the developer, known as hyperparameters. These parameters are used to optimize the performance of the model. They are generally classified as model hyperparameters (e.g., number of layers in a neural network) and training hyperparameters (e.g., learning rate).

When developing a machine learning model, the data are divided into training, validation, and test datasets. The training dataset is used to fit the model. The validation dataset is used to optimize hyperparameters and evaluate for overfitting. The test dataset is used to evaluate the final performance of the model.
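A minimal sketch of this workflow, assuming an illustrative 70/15/15 split and a small set of candidate learning rates (neither taken from any cited study), might look as follows in Python with scikit-learn:

```python
# Minimal sketch of a train/validation/test split with validation-based
# hyperparameter selection. The 70/15/15 proportions and the candidate
# learning rates are illustrative assumptions, not from any cited study.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((1000, 8))                 # 1000 samples, 8 features
y = rng.integers(0, 2, 1000)              # binary labels

# First split off the held-out test set, then split the remainder.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.15, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.15 / 0.85, random_state=0)

# Tune a training hyperparameter (learning rate) on the validation set.
best_lr, best_acc = None, -1.0
for lr in (1e-3, 1e-2, 1e-1):
    model = MLPClassifier(hidden_layer_sizes=(32,),   # model hyperparameter
                          learning_rate_init=lr,      # training hyperparameter
                          max_iter=200, random_state=0)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_val, model.predict(X_val))
    if acc > best_acc:
        best_lr, best_acc = lr, acc

# The test set is touched only once, to report final performance.
final = MLPClassifier(hidden_layer_sizes=(32,), learning_rate_init=best_lr,
                      max_iter=200, random_state=0).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, final.predict(X_test)))
```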

Preprocessing refers to the methods applied to images prior to analysis by the machine learning model. Techniques include histogram equalization to adjust contrast and Gaussian filtering to remove noise. Images are also commonly resized to match the fixed input dimensions of the model; within the network itself, images are then processed through multiple layers, where deeper layers typically contain an increasing number of dimensions.
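As a hedged illustration of these steps, the Python/OpenCV snippet below applies histogram equalization, Gaussian filtering, and resizing to a single frame; the file name, kernel size, and target dimensions are assumptions for demonstration only.

```python
# Sketch of the preprocessing steps named above, using OpenCV.
# File name, kernel size, and target size are illustrative assumptions.
import cv2

frame = cv2.imread("endoscopy_frame.png")            # hypothetical input image

# Histogram equalization to adjust contrast: applied to the luminance
# channel only, so colors are not distorted.
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
equalized = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

# Gaussian filtering to remove noise before analysis (5x5 kernel,
# sigma derived automatically from the kernel size).
denoised = cv2.GaussianBlur(equalized, (5, 5), 0)

# Resize to the fixed input dimensions the model expects.
resized = cv2.resize(denoised, (224, 224))
```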

Data augmentation is a process to artificially enlarge a dataset when developing an AI algorithm. It is typically performed via rotation, flipping, shear, and zoom of the original data, thus expanding the amount of data in the training dataset.
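A minimal sketch of such an augmentation pipeline, assuming illustrative rotation, shear, and zoom ranges (not drawn from any cited study), is shown below using the torchvision library:

```python
# Sketch of data augmentation with torchvision; the rotation range,
# shear, and zoom (scale) bounds are illustrative assumptions.
from torchvision import transforms
from PIL import Image

augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),                     # rotation
    transforms.RandomHorizontalFlip(p=0.5),                    # flipping
    transforms.RandomAffine(degrees=0, shear=10),              # shear
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),  # zoom via random crop
])

image = Image.open("polyp_frame.png")      # hypothetical training image
# Each pass yields a new randomized variant, enlarging the effective dataset.
variants = [augment(image) for _ in range(5)]
```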

Trials applying AI in GI oncology typically report the following metrics: sensitivity, specificity, positive predictive value (PPV, also called precision), negative predictive value (NPV), accuracy, and area under the receiver operating characteristic curve (AUROC). To measure the performance of a detection or segmentation task, the intersection over union (IoU) can be calculated by dividing the area of overlap between the predicted and ground-truth labels by the area of their union. The IoU threshold varies from study to study, and a predetermined value is typically set to distinguish true positives (TP) from false positives (FP); often an IoU ≥ 0.25-0.5 defines a TP, and a lower IoU an FP. Many prospective studies instead use a clinical definition of a true positive: a lesion correctly identified by either the AI system or the endoscopist. Using these parameters, various AI-based approaches for the detection of GI cancers can be compared.
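The short Python sketch below illustrates these definitions; the box coordinates, counts, and the 0.5 IoU threshold are illustrative assumptions within the ranges described above.

```python
# Sketch of the metrics defined above: IoU for detection/segmentation tasks
# and the confusion-matrix metrics reported by most trials. Coordinates,
# counts, and the 0.5 threshold are illustrative assumptions.
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

prediction, ground_truth = (10, 10, 60, 60), (20, 20, 70, 70)
is_true_positive = iou(prediction, ground_truth) >= 0.5  # threshold varies by study

# Confusion-matrix metrics from raw counts.
tp, fp, tn, fn = 90, 10, 85, 15
sensitivity = tp / (tp + fn)          # detected lesions / all lesions
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                  # positive predictive value (precision)
npv = tn / (tn + fn)
accuracy = (tp + tn) / (tp + fp + tn + fn)
```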

COLONOSCOPY

Globally, colorectal cancer (CRC) is the third most commonly diagnosed cancer and the second leading cause of cancer-related death[3]. Colonoscopy has been associated with a decrease in the incidence and mortality of CRC through the detection and removal of precancerous polyps[4,5]. Adenoma detection rate (ADR) is often used as a gold-standard metric for colonoscopy quality, and studies have shown that ADR is inversely associated with the rate of interval CRC after colonoscopy[6]. Studies have also shown, however, that roughly one fifth of adenomas are missed, even by expert endoscopists[7]. Evidence suggests that unrecognized polyps that appear within the endoscopic field of view are an important contributor to this problem. For instance, Aslanian et al[8] demonstrated that nurse observation during colonoscopy resulted in a trend towards improvement in ADR. In addition, Marcondes et al[9] demonstrated that ADR declines at the end of the day, suggesting that endoscopist factors such as fatigue may play a role in polyp detection. Several CADe systems based on traditional machine learning or deep learning have been designed to combat these problems, serve as a safety net or “second set of eyes” during colonoscopy, and thus augment ADR.

Once polyps are identified, polyp characterization is the next crucial step. Optical biopsy refers to the use of endoscopy to predict histology in vivo. The successful application of optical biopsy to polyps would reduce costs associated with pathologic assessment and prevent unnecessary polypectomies. Computer-based optical biopsy also has the potential to level the playing field for advanced endoscopic techniques such as endocytoscopy (a specialized imaging modality that allows for ultra-high magnification during live endoscopy) and allow providers to use these techniques with less interprovider variability. The American Society for Gastrointestinal Endoscopy Preservation and Incorporation of Valuable Endoscopic Innovations (PIVI) initiative proposed standards for “resect-and-discard” (≥ 90% agreement with histopathology for post-polypectomy surveillance intervals) and “diagnose-and-leave” (≥ 90% NPV for adenomatous histology) strategies for diminutive polyps[10]. A systematic review and meta-analysis revealed that optical biopsy using narrow-band imaging (NBI) met the PIVI-2 threshold for the “diagnose-and-leave” strategy, but only in the subgroup of expert endoscopists[11]. Not surprisingly, multiple CADx systems for the characterization of colorectal polyps have been developed to capitalize on the promise of optical biopsy and overcome the limitations of current technologies.

CADe

Perhaps the most well-studied application of AI in gastroenterology is polyp detection (Figure 1). Researchers in this field initially developed methods that recognized manually extracted polyp features such as shape, color, and texture[12]. These early efforts were based on the analysis of static endoscopic images or video frames[12,13]. The most recent technologies employ deep-learning algorithms that are capable of detecting polyps in real time[14,15]. AI-based polyp detection technologies are now commercially available in the United States, Europe, and Asia[16-18].

Figure 1 Example output from a computer-aided detection system using white light endoscopy (Fujifilm Corp., Tokyo). When a lesion is detected, the endoscopist is notified by a bounding box. Used with the permission of Fujifilm.

Several prospective, randomized trials have examined the efficacy of applying CADe to colonoscopy using deep learning methods (Table 1)[2,19-24]. Mohan et al[25] performed a meta-analysis including 6 of these trials, with a pooled population of 4962 patients. They found that ADR was significantly higher with CADe-assisted colonoscopy compared with standard colonoscopy [relative risk = 1.5, 95% confidence interval (CI): 1.3-1.72; P < 0.0001]. Colonoscopy withdrawal time was slightly longer in the CADe-assisted group (mean difference = 0.38 min, 95%CI: 0.05-0.72; P = 0.02).

Table 1 Characteristics of randomized trials applying computer-aided detection to colonoscopy

Wang et al[2], 2019. Training/validation datasets: 5545 images from 1290 colonoscopy videos performed in China, labeled by endoscopists (training: 4495 images; validation: 1050 images), plus CVC-ClinicDb, 612 image frames of polyps from 29 colonoscopy videos performed in Spain with polyp locations manually annotated by endoscopists. Testing datasets: 27113 images from 1138 colonoscopy videos performed in China (20% contained histologically confirmed polyps); videos of 138 histologically confirmed polyps from 110 patients in China; and 54 full-length colonoscopy videos from 54 patients in China. AI system: CNN based on the SegNet architecture. ADR with AI: 29%; ADR without AI: 20%. Withdrawal time with AI: 6.9 min; without AI: 6.4 min.

Wang et al[23], 2020. Datasets and AI system: as for Wang et al[2], 2019. ADR with AI: 34%; ADR without AI: 28%. Withdrawal time with AI: 7.5 min; without AI: 7.0 min.

Liu et al[24], 2020. Datasets and AI system: as for Wang et al[2], 2019. ADR with AI: 29%; ADR without AI: 21%. Withdrawal time with AI: 6.6 min; without AI: 6.7 min.

Repici et al[19], 2020. Training/validation and testing datasets: based on data from a previous clinical trial[74]; videos of 2684 histologically confirmed polyps from 840 patients in Europe and the US (training and validation: 2346 polyps from 735 patients; testing: 338 polyps from 105 patients). AI system: GI-Genius (Medtronic); CNN, details not available. ADR with AI: 55%; ADR without AI: 40%. Withdrawal time with AI: 7.0 min; without AI: 7.3 min.

Gong et al[20], 2020. Training/validation and testing datasets: all images obtained from colonoscopies of > 5000 patients in China. Three DCNNs were trained on still images. DCNN 1: 3264 in-vitro, 10180 in-vivo, and 4230 unqualified images used to train the system to determine whether the scope was inside or outside the body; 1000 images per category used for testing. DCNN 2: 5189 images of the cecum and 5630 non-cecum images used to train the system to identify the cecum; 500 images per category used for testing. DCNN 3: 2602 clear images, 1877 images in the cleansing process, and 1899 blurry images used to train the system to recognize slipping; 200 images per category used for testing. A k-fold cross-validation procedure was implemented with k = 10. AI system: DCNN 1-3 trained and tested in four independent convolutional neural networks: VGG16[75], DenseNet-169[76], ResNet-50[77], and Inception-v3[78]. ADR with AI: 16%; ADR without AI: 8%. Withdrawal time with AI: 6.4 min; without AI: 4.8 min.

Liu et al[21], 2020. Training/validation and testing datasets: 151 videos containing endoscopist-confirmed polyps and 384 polyp-negative videos from colonoscopies in China (training and validation: 101 polyp-positive and 300 polyp-negative cases; testing: 46 polyp-positive and 88 polyp-negative cases). AI system: CADe system (Henan Xuanweitang Medical Information Technology); 3-dimensional CNN. ADR with AI: 39%; ADR without AI: 24%. Withdrawal time with AI: 6.8 min; without AI: 6.7 min.

Su et al[22], 2020. Training/validation and testing datasets: 23612 images from colonoscopies of > 4000 patients in China, labeled by 2 endoscopists (training: 15951; validation: 3681; testing: 3980). AI system: 5 DCNN models created to time the withdrawal phase, supervise withdrawal stability, evaluate bowel preparation, and detect colorectal polyps in real time; Model B based on the AlexNet architecture[79], Model BP based on ZFNet[80], Model PD based on YOLO V2[81], and Model E developed using a DCNN with one fully connected layer. ADR with AI: 29%; ADR without AI: 17%. Withdrawal time with AI: 7.0 min; without AI: 5.7 min.

Although these findings are promising, these trials have several limitations. First, the augmented ADR seen in these trials was largely driven by improved detection of diminutive adenomas (size < 5 mm), the clinical benefit of which remains an area of active debate[26]. Second, only one trial was double-blinded[23]. In the single-blind trials, being observed may have fostered a “competitive spirit” or Hawthorne effect in provider participants, leading to improved inspection techniques[8]. Third, all but one of these trials were performed at a single center[19]. Thus, the results of these studies may not be broadly generalizable. Given this promise and these limitations, the European Society of Gastrointestinal Endoscopy published guidelines in 2019 suggesting “the possible incorporation of computer aided diagnosis… into colonoscopy, if acceptable and reproducible accuracy for colorectal neoplasia is demonstrated in high quality multicenter in vivo clinical studies[27].” Guidance and guidelines have since been produced to aid gastroenterologists in conducting, reviewing, and interpreting CADe studies, with the goal of accelerating the entrance of this technology into routine clinical practice[28].

CADx

CADx systems for the characterization of colorectal polyps have been developed using a variety of imaging modalities including white light endoscopy, magnifying NBI (M-NBI), autofluorescence endoscopy, endocytoscopy, and magnifying chromoendoscopy (Figure 2). The majority of studies examining these technologies are retrospective in nature. Only six prospective trials have been performed, and none of them were randomized controlled trials[10,29-33]. Aihara et al[32] published the first prospective CADx trial for colorectal lesions in 2013. Investigators used autofluorescence endoscopy to distinguish between neoplastic and non-neoplastic lesions in 32 patients with 102 colorectal lesions. The CADx system had a sensitivity, specificity, PPV, and NPV of 94.2%, 88.9%, 95.6%, and 85.2%, respectively[32]. Kuiper et al[30] performed another trial using autofluorescence endoscopy and CADx that included 87 patients with 207 colorectal lesions; this study achieved an NPV of 73.5%. In a subsequent study using the next-generation model of the same device on 27 patients with 137 diminutive colorectal polyps, Rath et al[31] reported an improved NPV of 96.1%, meeting the PIVI-2 criteria for the “diagnose-and-leave” strategy. A more recent study utilizing autofluorescence endoscopy was published by Horiuchi et al[33] in 2019. The authors evaluated 95 patients with 429 diminutive colorectal polyps and found an NPV for rectosigmoid polyps of 93.4%. When rectosigmoid and non-rectosigmoid polyps were evaluated together, however, the NPV decreased to 80.8%. Kominami et al[29] utilized M-NBI in a study of 41 patients with 118 colorectal lesions. That trial achieved an NPV of 93.3%, and the recommendations for follow-up colonoscopy based on the CADx system and pathology were identical for 92.7% of patients. Thus, their system surpassed the PIVI criteria for both the “diagnose-and-leave” and “resect-and-discard” strategies. Mori et al[10] performed the largest prospective CADx trial to date, which included 325 patients with 466 diminutive polyps. The CADx algorithm in this trial analyzed endocytoscopy images after application of NBI or methylene blue dye. The authors found that for the 250 rectosigmoid polyps in their study, using the most conservative estimate, the NPV was 93.7%, meeting the PIVI-2 threshold to support a “diagnose-and-leave” strategy.

Figure 2 Example output from a computer-aided diagnosis system using narrow-band imaging (Fujifilm Corp., Tokyo). The system predicts whether or not the lesion of interest is neoplastic. Used with the permission of Fujifilm.

ESOPHAGOGASTRODUODENOSCOPY

Many upper GI malignant processes, including esophageal and gastric pre-cancerous and cancerous lesions, are easy to miss and can be confused with benign processes such as esophagitis or gastritis. In addition, if a patient has numerous lesions, it becomes difficult to determine which lesions require biopsy. Even with a significant amount of training, 20%-25% of early gastric cancers are missed when utilizing high-definition white light endoscopy[34]. Consequently, much work has focused on using AI to improve the detection and diagnosis of these increasingly prevalent lesions.

Detection of early gastric cancer

In 2015, Miyaki et al[35] utilized SVM to delineate early gastric cancer on esophagogastroduodenoscopy (EGD) with M-NBI in 95 patients from a single hospital in Japan. This was the first study to use an SVM-based traditional machine learning algorithm to delineate gastric cancerous lesions relative to noncancerous reddened lesions or surrounding tissue[35]. This idea was expanded on by Kanesaka et al[36], who utilized SVM in real time with M-NBI to detect lesions concerning for early gastric cancer. In this retrospective study, the CADe system achieved an accuracy, sensitivity, and specificity of 96.3%, 96.7%, and 95%, respectively[36]. Kanesaka et al[36] demonstrated the power of SVM for the detection of gastric cancer, but their study was limited by its sample size (81 test images), lesion type (only depressed-type lesions), and selection bias. In 2018, Hirasawa et al[37] developed a CNN-based system for detecting early and advanced gastric cancer. The system was trained on 13584 images and tested on 2296 images from 69 patients, demonstrating a sensitivity of 92.2% and a PPV of 30.6%[37]. Most false positives were related to gastritis[37]. Overall, this study provided sufficient evidence that a deep neural network-based approach was feasible for the detection of early gastric cancer, but several limitations were also noted. Li et al[38] applied a CNN-based system to M-NBI for the detection of early gastric cancer. This system was trained on 2088 images and tested on 341 images, achieving an accuracy, sensitivity, and specificity of 90.91%, 91.18%, and 90.64%, respectively, with a significant improvement in sensitivity relative to “expert” endoscopists[38]. The accuracy, sensitivity, and specificity in the Li et al[38] study were lower than the results published by Kanesaka et al[36] with SVM. These differences, however, are difficult to compare directly given varied nomenclature and histologic interpretation by groups from different countries.

Zhu et al[40] developed another CNN-based system in 2019 with the ability to determine the invasion depth of gastric cancer. A total of 790 images were used for training and 203 images were used to test the system[40]. The authors achieved a sensitivity and specificity of 76.47% and 95.56%, respectively, with a PPV and NPV of 89.66% and 88.97%, respectively, on the test dataset[40]. They also demonstrated that the CNN-based system had a significantly higher accuracy for the determination of invasion depth compared to a small group of 17 endoscopists[40]. This study was the first to use a CNN to evaluate the invasion depth of gastric cancer and has significant potential clinical utility. Major limitations include a small sample size, lack of validation and testing on video or live endoscopy, and the fact that the data were collected from a single center using a single type of endoscope.

Wu et al[39] described the use of a CNN to help eliminate blind spots and detect early gastric cancer. With regard to classifying gastric locations, their CNN-based approach had an accuracy of 90% and 65.9% when dividing the stomach into 10 and 26 parts, respectively[39]. For the detection of early gastric cancer, this study achieved promising results with an accuracy of 92.5%, sensitivity of 94.0%, specificity of 91.0%, PPV of 91.3%, and NPV of 93.8%[39]. In 2021, Wu et al[39,40] published the first multi-center randomized controlled trial investigating the detection of blind spots and early gastric cancer using an updated version of the CNN-based AI system discussed above. In this study, 1050 patients from 5 hospitals were randomized to receive AI-assisted endoscopy or standard-of-care endoscopy. The AI-assisted group had significantly fewer blind spots. The accuracy, sensitivity, and specificity of the system were 84.69%, 100%, and 84.29%, respectively, for the detection of gastric cancer[40]. The trial yielded a lower accuracy and specificity relative to previous publications and the single-center study by Li et al[38]. However, this was the first study of its kind to evaluate a CNN-based system prospectively in a randomized clinical trial.

Barrett’s esophagus

In the United States, esophageal adenocarcinoma accounts for approximately two thirds of newly diagnosed esophageal cancers and is associated with a poor prognosis when identified at a late stage[41]. When identified, esophageal premalignant lesions can be treated via ablation or endoscopic resection, drastically improving outcomes[42,43]. Traditionally, “random” biopsies were obtained with a relatively low diagnostic yield, as lesions concerning for neoplasia in patients with Barrett’s esophagus (BE) are often challenging to identify. Recently, several groups have studied the implementation of AI during EGD for screening and surveillance of BE. In 2016, van der Sommen et al[44] published the first study using machine learning for the detection of early neoplastic lesions in BE. The algorithm achieved a sensitivity of 86% and specificity of 87%[44], but did not outperform an expert endoscopist in that study. Swager et al[45] expanded on this concept and developed a machine learning algorithm for volumetric laser endomicroscopy (VLE). The resultant system achieved a sensitivity and specificity of 90% and 93%, respectively[45]. It also outperformed a clinical VLE prediction score[45]. In 2019, the ARGOS consortium developed a CADe system to detect Barrett’s lesions using white light endoscopy (WLE), which achieved an accuracy, sensitivity, and specificity of 92%, 95%, and 85%, respectively[46]. Although this approach yielded highly accurate results, it was tested on high-quality images and limited by human perceptual bias, as the algorithm was trained to detect abnormalities based on variations in color and texture. The ARGOS consortium sought to improve on their initial approach by developing a deep learning-based CADe system built on a hybrid ResNet-UNet CNN[47]. This method achieved 89% accuracy, 90% sensitivity, and 88% specificity for distinguishing neoplasms from nondysplastic BE[47]. Their deep learning-based CADe system also outperformed 53 international endoscopic assessors ranging in experience from research fellows with no endoscopic expertise to board-certified endoscopists with greater than 5 years of experience[47]. The authors also implemented their algorithm during live endoscopic procedures on 10 patients with BE[48]. The system achieved an accuracy, sensitivity, and specificity of 90%, 91%, and 89%, respectively, during clinical use[48]. Hashimoto et al[49] also demonstrated the power of a CNN-based algorithm for the detection and classification of early esophageal neoplasia. On 458 test images they achieved a sensitivity of 96.4%, specificity of 94.2%, and accuracy of 95.4% at a speed allowing for implementation during live endoscopy[49]. Though CNN-based systems are beginning to be implemented prospectively in the clinical trial setting, the first multi-center, randomized clinical trials utilizing AI for the detection of neoplasia in patients with BE will likely be published in the near future.

Detection of esophageal squamous cell carcinoma

In 2019, Horie et al[50] published the first study applying CNN-based systems to EGD for the detection of esophageal cancer. This was a single-center trial that used 8428 images from 384 patients for training and 1118 images from 97 patients for testing[50]. The system achieved a sensitivity of 98% and specificity of 79%, with a PPV of 40% and NPV of 95%, for the diagnosis of esophageal cancer[50]. Shadows were the most common cause of false positives, and background mucosal inflammation was the most common cause of false negatives[50]. Cai et al[51] utilized a CNN for the detection of esophageal squamous cell carcinoma (SCC), training it with 2428 images from 746 patients and testing it on 187 images from 52 patients. They achieved an accuracy, sensitivity, specificity, PPV, and NPV of 91.4%, 97.8%, 85.4%, 86.4%, and 97.6%, respectively[51]. They also demonstrated that the use of the CNN significantly increased both the accuracy and sensitivity of esophageal SCC detection by junior, mid-level, and senior endoscopists reviewing still images[51]. Guo et al[52] trained a CNN-based system on 6473 narrow-band images that was validated on 6671 images and achieved a sensitivity of 98.04% and a specificity of 95.03% for the detection of precancerous lesions or early esophageal SCC. The authors also tested the system on 27 non-magnifying videos, achieving a per-frame sensitivity of 60.8% and per-lesion sensitivity of 100%[52]. When applied to 20 magnifying videos, the per-frame sensitivity increased to 96.1%, and the per-lesion sensitivity remained at 100%[52]. Another group using a CNN with endoscopy to detect SCC demonstrated no significant difference in accuracy, sensitivity, or specificity between AI diagnosis and endoscopist diagnosis using narrow-band imaging or white light imaging[53]. Liu et al[54] constructed a two-stream CNN system achieving an accuracy of 85.83%, sensitivity of 94.23%, and specificity of 94.67%, outperforming SVM-based methods on the same dataset. Fukuda et al[55] developed a CNN-based algorithm using NBI/BLI to detect and characterize lesions suspicious for SCC. For lesion detection, the system achieved a sensitivity, specificity, and accuracy of 91%, 51%, and 63%, respectively[55]. The algorithm outperformed experts with regard to sensitivity but underperformed with regard to specificity and accuracy[55]. For the characterization of lesions, however, the CNN-based algorithm outperformed expert endoscopists, achieving a specificity, sensitivity, and accuracy of 86%, 89%, and 88%, respectively[55]. As with many other CADe and CADx systems, over a relatively short time period we have seen significant advances in the early detection of pre-malignant lesions and a shift from traditional machine learning to deep neural networks.

CAPSULE ENDOSCOPY

Traditional endoscopic techniques allow for the visualization of the esophagus, stomach, duodenum, terminal ileum, and colon. With the advent of push enteroscopy, we have the ability to reach the proximal jejunum but are still unable to explore most of the small intestine. Capsule endoscopy (CE) uses a 26 mm × 11 mm pill-sized video camera that is swallowed and wirelessly transmits video from the entire GI tract, allowing for visualization of portions of the jejunum and ileum previously unreachable or difficult to reach. Unlike traditional endoscopy, CE cannot be steered by an operator, so important pathology can be missed, and there is no way to intervene immediately if an abnormality is identified. CE is also limited by an eight- to twelve-hour battery life and the risk of obstruction in patients with strictures. Even with these limitations, CE has become an important tool for the diagnosis of GI pathology.

Decades after its initial conception, the first CE device was approved for use in 2001 by the Food and Drug Administration (FDA), ushering in a new era of discovery[56]. As the practice of CE became more mainstream, physicians were tasked with interpreting videos averaging 30-120 min in length and containing a staggering 50000-60000 frames per study[57,58]. It is an incredibly arduous task for an endoscopist to maintain attention and consistently identify evidence of pathology that may appear in as little as a single frame while combing through hours of video. The miss rate in this setting was reported to be at least 50% in a small blinded study from 2012[59]. Recently we have seen the parallel development of AI algorithms to help interpret the swaths of data generated by CE studies. Initially the development approach was based on traditional machine learning, with many studies utilizing SVM, but the field has made a substantial shift towards deep learning, primarily through CNNs, which in general have afforded favorable performance characteristics.

A major application of CE is the ability to noninvasively identify polyps and lesions concerning for malignancy throughout the GI tract. Early efforts consisted of traditional machine learning algorithms, such as support vector classifiers, designed to identify the presence or absence of a polyp. One early paper using a binary classifier based on geometrical analysis demonstrated 47% per-frame sensitivity and over 81% per-polyp sensitivity with a specificity of 90%[60]. Using a boosting-based approach, Silva et al[61] achieved a sensitivity of 91.0% and a specificity of 95.2% for polyp detection with CE. This was expanded on by Iakovidis et al[62], who utilized color feature-based pattern recognition to subclassify lesions. Liu et al[63] implemented multiscale textural features and an SVM-based feature selection method to enhance polyp classification, achieving an accuracy of 97.3%, sensitivity of 97.8%, and specificity of 96.7%. Various groups have sought to improve traditional machine learning approaches by using a genetic fuzzy-based improved kernel SVM[64] and by using ensemble learning[65].

A study from 2020 investigated the application of a CNN-based system to CE for the detection of protruding lesions including polyps, nodules, epithelial tumors, submucosal tumors, and venous structures[66]. In this study, the sensitivity and specificity for detecting any protruding lesion on test images were 90.7% and 79.8%, respectively[66]. Subgroup analysis yielded a sensitivity of 86.5% for polyp detection[66]. When applied to patients, the sensitivity for protruding lesions increased to 98.6%[66]. Currently, the well-established SVM-based methods appear to be superior for the detection and classification of polyps; further training and larger studies may be required for CNNs to outperform SVMs, and all of these studies remain preclinical.

ENDOSCOPIC ULTRASOUND

AI applications for endoscopic ultrasound (EUS) are still in their nascent stages. The majority of work utilizing AI for EUS has focused on diagnosing pancreatic cancer. A variety of conventional machine learning techniques, including principal component analysis, SVM, and artificial neural networks, have been utilized[67-69]. Recently, Kuwahara et al[70] performed the first deep learning-based study, using a CNN to predict malignancy in intraductal papillary mucinous neoplasms (IPMN). They trained their algorithm on 3970 still images and achieved a sensitivity, specificity, PPV, NPV, and accuracy of 95.7%, 92.6%, 91.7%, 96.2%, and 94.0%, respectively. Of note, human accuracy for predicting IPMN malignancy in this study was only 56.0%. In 2020, Marya et al[71] performed a retrospective study using a CNN-based system to differentiate autoimmune pancreatitis from pancreatic ductal adenocarcinoma (PDAC). The system was 90% sensitive and 93% specific for differentiating autoimmune pancreatitis from PDAC.

Outside of the field of pancreatic cancer, Minoda et al[72] published a retrospective study evaluating the ability of a CNN-based system to diagnose gastrointestinal stromal tumors among subepithelial lesions (SEL) using EUS images. Among 30 SELs ≥ 20 mm, the system achieved an accuracy, sensitivity, and specificity of 90.0%, 91.7%, and 83.3%, respectively. Finally, Marya et al[73] utilized a CNN to identify focal liver lesions (FLL) and classify them as malignant or benign. The authors included a total of 210685 EUS images in their study. Their algorithm correctly identified 92% of FLLs. When evaluating video data, they achieved a sensitivity of 100% and specificity of 80% for the classification of malignant FLLs.

CONCLUSION

AI technology applied to gastrointestinal oncology has an exciting future and the potential to decrease morbidity, mortality, and costs. Research groups have demonstrated how AI can augment the detection and diagnosis of numerous GI malignancies. This field is growing rapidly, but it is still in its infancy. Although we have recently seen the first prospective, randomized trials emerging in several spaces, most studies in this field are still retrospective. Furthermore, the majority of datasets used to train the algorithms in these studies were collected from single-center databases in relatively homogeneous patient populations. Consequently, these studies are at high risk of selection bias, and their models are at risk of overfitting. In order to create robust tools ready for general clinical practice, multicenter, randomized controlled clinical trials conducted by endoscopists of various skill levels on diverse patient populations and utilizing robustly trained and validated models are needed. Additionally, it will be important to monitor the efficacy of these tools in the real-world setting. Finally, clinicians will need to collaborate with lawmakers and other stakeholders to determine how best to regulate these technologies and establish clear policies on accountability. In clinical practice today, AI serves as a “safety net” for physicians: a second set of eyes that supports, rather than makes, a diagnosis. We believe it will be many years before AI is used to make definitive diagnoses or drive management decisions. Gastroenterologists should work to familiarize themselves with the strengths and limitations of these technologies so they can take an active role in a future AI-assisted healthcare system.

Footnotes

Provenance and peer review: Invited article; Externally peer reviewed.

Peer-review model: Single blind

Corresponding Author's Membership in Professional Societies: American College of Gastroenterology; American Gastroenterological Association; and American Society for Gastrointestinal Endoscopy.

Specialty type: Gastroenterology and hepatology

Country/Territory of origin: United States

Peer-review report’s scientific quality classification

Grade A (Excellent): A

Grade B (Very good): 0

Grade C (Good): C, C

Grade D (Fair): 0

Grade E (Poor): 0

P-Reviewer: Costache RS, Romania; Hanada E, Japan; Ma J, China
S-Editor: Gong ZM
L-Editor: A
P-Editor: Gong ZM

References

1. Rehnberg V, Walters E. The life and work of Adolph Kussmaul 1822-1902: 'Sword swallowers in modern medicine'. J Intensive Care Soc. 2017;18:71-72.
2. Wang P, Berzin TM, Glissen Brown JR, Bharadwaj S, Becq A, Xiao X, Liu P, Li L, Song Y, Zhang D, Li Y, Xu G, Tu M, Liu X. Real-time automatic detection system increases colonoscopic polyp and adenoma detection rates: a prospective randomised controlled study. Gut. 2019;68:1813-1819.
3. Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2018;68:394-424.
4. Nishihara R, Wu K, Lochhead P, Morikawa T, Liao X, Qian ZR, Inamura K, Kim SA, Kuchiba A, Yamauchi M, Imamura Y, Willett WC, Rosner BA, Fuchs CS, Giovannucci E, Ogino S, Chan AT. Long-term colorectal-cancer incidence and mortality after lower endoscopy. N Engl J Med. 2013;369:1095-1105.
5. Kahi CJ, Imperiale TF, Juliar BE, Rex DK. Effect of screening colonoscopy on colorectal cancer incidence and mortality. Clin Gastroenterol Hepatol. 2009;7:770-5; quiz 711.
6. Kaminski MF, Regula J, Kraszewska E, Polkowski M, Wojciechowska U, Didkowska J, Zwierko M, Rupinski M, Nowacki MP, Butruk E. Quality indicators for colonoscopy and the risk of interval cancer. N Engl J Med. 2010;362:1795-1803.
7. Ahn SB, Han DS, Bae JH, Byun TJ, Kim JP, Eun CS. The Miss Rate for Colorectal Adenoma Determined by Quality-Adjusted, Back-to-Back Colonoscopies. Gut Liver. 2012;6:64-70.
8. Aslanian HR, Shieh FK, Chan FW, Ciarleglio MM, Deng Y, Rogart JN, Jamidar PA, Siddiqui UD. Nurse observation during colonoscopy increases polyp detection: a randomized prospective study. Am J Gastroenterol. 2013;108:166-172.
9. Marcondes FO, Gourevitch RA, Schoen RE, Crockett SD, Morris M, Mehrotra A. Adenoma Detection Rate Falls at the End of the Day in a Large Multi-site Sample. Dig Dis Sci. 2018;63:856-859.
10. Mori Y, Kudo SE, Misawa M, Saito Y, Ikematsu H, Hotta K, Ohtsuka K, Urushibara F, Kataoka S, Ogawa Y, Maeda Y, Takeda K, Nakamura H, Ichimasa K, Kudo T, Hayashi T, Wakamura K, Ishida F, Inoue H, Itoh H, Oda M, Mori K. Real-Time Use of Artificial Intelligence in Identification of Diminutive Polyps During Colonoscopy: A Prospective Study. Ann Intern Med. 2018;169:357-366.
11. ASGE Technology Committee; Abu Dayyeh BK, Thosani N, Konda V, Wallace MB, Rex DK, Chauhan SS, Hwang JH, Komanduri S, Manfredi M, Maple JT, Murad FM, Siddiqui UD, Banerjee S. ASGE Technology Committee systematic review and meta-analysis assessing the ASGE PIVI thresholds for adopting real-time endoscopic assessment of the histology of diminutive colorectal polyps. Gastrointest Endosc. 2015;81:502.e1-502.e16.
12. Iakovidis DK, Maroulis DE, Karkanis SA. An intelligent system for automatic detection of gastrointestinal adenomas in video endoscopy. Comput Biol Med. 2006;36:1084-1103.
13. Maroulis DE, Iakovidis DK, Karkanis SA, Karras DA. CoLD: a versatile detection system for colorectal lesions in endoscopy video-frames. Comput Methods Programs Biomed. 2003;70:151-166.
14. Cogan T, Cogan M, Tamil L. MAPGI: Accurate identification of anatomical landmarks and diseased tissue in gastrointestinal tract using deep learning. Comput Biol Med. 2019;111:103351.
15. Bilal M, Glissen Brown JR, Berzin TM. Using Computer-Aided Polyp Detection During Colonoscopy. Am J Gastroenterol. 2020;115:963-966.
16. Cybernet Systems Co., Ltd. EndoBRAIN - Artificial intelligence system that supports optical diagnosis of colorectal polyps - was approved by PMDA (Pharmaceuticals and Medical Devices Agency), a regulatory body in Japan. 2018. Available from: https://www.cybernet.jp/english/documents/pdf/news/press/2018/20181210.pdf
17. Medtronic. Medtronic Launches the First Artificial Intelligence System for Colonoscopy at United European Gastroenterology Week 2019. 2019. Available from: https://newsroom.medtronic.com/news-releases/news-release-details/medtronic-launches-first-artificial-intelligence-system/
18. FDA. FDA Authorizes Marketing of First Device that Uses Artificial Intelligence to Help Detect Potential Signs of Colon Cancer. 2021. Available from: https://www.fda.gov/news-events/press-announcements/fda-authorizes-marketing-first-device-uses-artificial-intelligence-help-detect-potential-signs-colon
19. Repici A, Badalamenti M, Maselli R, Correale L, Radaelli F, Rondonotti E, Ferrara E, Spadaccini M, Alkandari A, Fugazza A, Anderloni A, Galtieri PA, Pellegatta G, Carrara S, Di Leo M, Craviotto V, Lamonaca L, Lorenzetti R, Andrealli A, Antonelli G, Wallace M, Sharma P, Rosch T, Hassan C. Efficacy of Real-Time Computer-Aided Detection of Colorectal Neoplasia in a Randomized Trial. Gastroenterology. 2020;159:512-520.e7.
20. Gong D, Wu L, Zhang J, Mu G, Shen L, Liu J, Wang Z, Zhou W, An P, Huang X, Jiang X, Li Y, Wan X, Hu S, Chen Y, Hu X, Xu Y, Zhu X, Li S, Yao L, He X, Chen D, Huang L, Wei X, Wang X, Yu H. Detection of colorectal adenomas with a real-time computer-aided system (ENDOANGEL): a randomised controlled study. Lancet Gastroenterol Hepatol. 2020;5:352-361.
21. Liu WN, Zhang YY, Bian XQ, Wang LJ, Yang Q, Zhang XD, Huang J. Study on detection rate of polyps and adenomas in artificial-intelligence-aided colonoscopy. Saudi J Gastroenterol. 2020;26:13-19.
22. Su JR, Li Z, Shao XJ, Ji CR, Ji R, Zhou RC, Li GC, Liu GQ, He YS, Zuo XL, Li YQ. Impact of a real-time automatic quality control system on colorectal polyp and adenoma detection: a prospective randomized controlled study (with videos). Gastrointest Endosc. 2020;91:415-424.e4.
23. Wang P, Liu X, Berzin TM, Glissen Brown JR, Liu P, Zhou C, Lei L, Li L, Guo Z, Lei S, Xiong F, Wang H, Song Y, Pan Y, Zhou G. Effect of a deep-learning computer-aided detection system on adenoma detection during colonoscopy (CADe-DB trial): a double-blind randomised study. Lancet Gastroenterol Hepatol. 2020;5:343-351.
24. Liu P, Wang P, Glissen Brown JR, Berzin TM, Zhou G, Liu W, Xiao X, Chen Z, Zhang Z, Zhou C, Lei L, Xiong F, Li L, Liu X. The single-monitor trial: an embedded CADe system increased adenoma detection during colonoscopy: a prospective randomized study. Therap Adv Gastroenterol. 2020;13:1756284820979165.
25. Mohan BP, Facciorusso A, Khan SR, Chandan S, Kassab LL, Gkolfakis P, Tziatzios G, Triantafyllou K, Adler DG. Real-time computer aided colonoscopy versus standard colonoscopy for improving adenoma detection rate: A meta-analysis of randomized-controlled trials. EClinicalMedicine. 2020;29-30:100622.
26. Lieberman D, Sullivan BA, Hauser ER, Qin X, Musselwhite LW, O'Leary MC, Redding TS 4th, Madison AN, Bullard AJ, Thomas R, Sims KJ, Williams CD, Hyslop T, Weiss D, Gupta S, Gellad ZF, Robertson DJ, Provenzale D. Baseline Colonoscopy Findings Associated With 10-Year Outcomes in a Screening Cohort Undergoing Colonoscopy Surveillance. Gastroenterology. 2020;158:862-874.e8.
27. Bisschops R, East JE, Hassan C, Hazewinkel Y, Kamiński MF, Neumann H, Pellisé M, Antonelli G, Bustamante Balen M, Coron E, Cortas G, Iacucci M, Yuichi M, Longcroft-Wheaton G, Mouzyka S, Pilonis N, Puig I, van Hooft JE, Dekker E. Advanced imaging for detection and differentiation of colorectal neoplasia: European Society of Gastrointestinal Endoscopy (ESGE) Guideline - Update 2019. Endoscopy. 2019;51:1155-1179.
28. Liu X, Cruz Rivera S, Moher D, Calvert MJ, Denniston AK; SPIRIT-AI and CONSORT-AI Working Group. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Lancet Digit Health. 2020;2:e537-e548.
29. Kominami Y, Yoshida S, Tanaka S, Sanomura Y, Hirakawa T, Raytchev B, Tamaki T, Koide T, Kaneda K, Chayama K. Computer-aided diagnosis of colorectal polyp histology by using a real-time image recognition system and narrow-band imaging magnifying colonoscopy. Gastrointest Endosc. 2016;83:643-649.
30. Kuiper T, Alderlieste YA, Tytgat KM, Vlug MS, Nabuurs JA, Bastiaansen BA, Löwenberg M, Fockens P, Dekker E. Automatic optical diagnosis of small colorectal lesions by laser-induced autofluorescence. Endoscopy. 2015;47:56-62.
31. Rath T, Tontini GE, Vieth M, Nägel A, Neurath MF, Neumann H. In vivo real-time assessment of colorectal polyp histology using an optical biopsy forceps system based on laser-induced fluorescence spectroscopy. Endoscopy. 2016;48:557-562.
32. Aihara H, Saito S, Inomata H, Ide D, Tamai N, Ohya TR, Kato T, Amitani S, Tajiri H. Computer-aided diagnosis of neoplastic colorectal lesions using 'real-time' numerical color analysis during autofluorescence endoscopy. Eur J Gastroenterol Hepatol. 2013;25:488-494.
33. Horiuchi H, Tamai N, Kamba S, Inomata H, Ohya TR, Sumiyama K. Real-time computer-aided diagnosis of diminutive rectosigmoid polyps using an auto-fluorescence imaging system and novel color intensity analysis software. Scand J Gastroenterol. 2019;54:800-805.
34. Kaise M. Advanced endoscopic imaging for early gastric cancer. Best Pract Res Clin Gastroenterol. 2015;29:575-587.
35. Miyaki R, Yoshida S, Tanaka S, Kominami Y, Sanomura Y, Matsuo T, Oka S, Raytchev B, Tamaki T, Koide T, Kaneda K, Yoshihara M, Chayama K. A computer system to be used with laser-based endoscopy for quantitative diagnosis of early gastric cancer. J Clin Gastroenterol. 2015;49:108-115.
36. Kanesaka T, Lee TC, Uedo N, Lin KP, Chen HZ, Lee JY, Wang HP, Chang HT. Computer-aided diagnosis for identifying and delineating early gastric cancers in magnifying narrow-band imaging. Gastrointest Endosc. 2018;87:1339-1344.
37. Hirasawa T, Aoyama K, Tanimoto T, Ishihara S, Shichijo S, Ozawa T, Ohnishi T, Fujishiro M, Matsuo K, Fujisaki J, Tada T. Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images. Gastric Cancer. 2018;21:653-660.
38. Li L, Chen Y, Shen Z, Zhang X, Sang J, Ding Y, Yang X, Li J, Chen M, Jin C, Chen C, Yu C. Convolutional neural network for the diagnosis of early gastric cancer based on magnifying narrow band imaging. Gastric Cancer. 2020;23:126-132.
39. Niu PH, Zhao LL, Wu HL, Zhao DB, Chen YT. Artificial intelligence in gastric cancer: Application and future perspectives. World J Gastroenterol. 2020;26:5408-5419.
40. Zhu Y, Wang QC, Xu MD, Zhang Z, Cheng J, Zhong YS, Zhang YQ, Chen WF, Yao LQ, Zhou PH, Li QL. Application of convolutional neural network in the diagnosis of the invasion depth of gastric cancer based on conventional endoscopy. Gastrointest Endosc. 2019;89:806-815.e1.
41. Patel N, Benipal B. Incidence of Esophageal Cancer in the United States from 2001-2015: A United States Cancer Statistics Analysis of 50 States. Cureus. 2018;10:e3709.
42. Behrens A, Pech O, Graupe F, May A, Lorenz D, Ell C. Barrett's adenocarcinoma of the esophagus: better outcomes through new methods of diagnosis and treatment. Dtsch Arztebl Int. 2011;108:313-319.
43. Phoa KN, Pouw RE, Bisschops R, Pech O, Ragunath K, Weusten BL, Schumacher B, Rembacken B, Meining A, Messmann H, Schoon EJ, Gossner L, Mannath J, Seldenrijk CA, Visser M, Lerut T, Seewald S, ten Kate FJ, Ell C, Neuhaus H, Bergman JJ. Multimodality endoscopic eradication for neoplastic Barrett oesophagus: results of an European multicentre study (EURO-II). Gut. 2016;65:555-562.
44. van der Sommen F, Zinger S, Curvers WL, Bisschops R, Pech O, Weusten BL, Bergman JJ, de With PH, Schoon EJ. Computer-aided detection of early neoplastic lesions in Barrett's esophagus. Endoscopy. 2016;48:617-624.
45. Swager AF, van der Sommen F, Klomp SR, Zinger S, Meijer SL, Schoon EJ, Bergman JJGHM, de With PH, Curvers WL. Computer-aided detection of early Barrett's neoplasia using volumetric laser endomicroscopy. Gastrointest Endosc. 2017;86:839-846.
46. de Groof J, van der Sommen F, van der Putten J, Struyvenberg MR, Zinger S, Curvers WL, Pech O, Meining A, Neuhaus H, Bisschops R, Schoon EJ, de With PH, Bergman JJ. The Argos project: The development of a computer-aided detection system to improve detection of Barrett's neoplasia on white light endoscopy. United European Gastroenterol J. 2019;7:538-547.
47. de Groof AJ, Struyvenberg MR, van der Putten J, van der Sommen F, Fockens KN, Curvers WL, Zinger S, Pouw RE, Coron E, Baldaque-Silva F, Pech O, Weusten B, Meining A, Neuhaus H, Bisschops R, Dent J, Schoon EJ, de With PH, Bergman JJ. Deep-Learning System Detects Neoplasia in Patients With Barrett's Esophagus With Higher Accuracy Than Endoscopists in a Multistep Training and Validation Study With Benchmarking. Gastroenterology. 2020;158:915-929.e4.
48. de Groof AJ, Struyvenberg MR, Fockens KN, van der Putten J, van der Sommen F, Boers TG, Zinger S, Bisschops R, de With PH, Pouw RE, Curvers WL, Schoon EJ, Bergman JJGHM. Deep learning algorithm detection of Barrett's neoplasia with high accuracy during live endoscopic procedures: a pilot study (with video). Gastrointest Endosc. 2020;91:1242-1250.
49. Hashimoto R, Requa J, Dao T, Ninh A, Tran E, Mai D, Lugo M, El-Hage Chehade N, Chang KJ, Karnes WE, Samarasena JB. Artificial intelligence using convolutional neural networks for real-time detection of early esophageal neoplasia in Barrett's esophagus (with video). Gastrointest Endosc. 2020;91:1264-1271.e1.
50. Horie Y, Yoshio T, Aoyama K, Yoshimizu S, Horiuchi Y, Ishiyama A, Hirasawa T, Tsuchida T, Ozawa T, Ishihara S, Kumagai Y, Fujishiro M, Maetani I, Fujisaki J, Tada T. Diagnostic outcomes of esophageal cancer by artificial intelligence using convolutional neural networks. Gastrointest Endosc. 2019;89:25-32.
51. Cai SL, Li B, Tan WM, Niu XJ, Yu HH, Yao LQ, Zhou PH, Yan B, Zhong YS. Using a deep learning system in endoscopy for screening of early esophageal squamous cell carcinoma (with video). Gastrointest Endosc. 2019;90:745-753.e2.
52.  Guo L, Xiao X, Wu C, Zeng X, Zhang Y, Du J, Bai S, Xie J, Zhang Z, Li Y, Wang X, Cheung O, Sharma M, Liu J, Hu B. Real-time automated diagnosis of precancerous lesions and early esophageal squamous cell carcinoma using a deep learning model (with videos). Gastrointest Endosc. 2020;91:41-51.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 108]  [Cited by in F6Publishing: 111]  [Article Influence: 27.8]  [Reference Citation Analysis (0)]
53.  Ohmori M, Ishihara R, Aoyama K, Nakagawa K, Iwagami H, Matsuura N, Shichijo S, Yamamoto K, Nagaike K, Nakahara M, Inoue T, Aoi K, Okada H, Tada T. Endoscopic detection and differentiation of esophageal lesions using a deep neural network. Gastrointest Endosc. 2020;91:301-309.e1.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 74]  [Cited by in F6Publishing: 72]  [Article Influence: 18.0]  [Reference Citation Analysis (0)]
54.  Liu G, Hua J, Wu Z, Meng T, Sun M, Huang P, He X, Sun W, Li X, Chen Y. Automatic classification of esophageal lesions in endoscopic images using a convolutional neural network. Ann Transl Med. 2020;8:486.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 23]  [Cited by in F6Publishing: 23]  [Article Influence: 5.8]  [Reference Citation Analysis (0)]
55.  Fukuda H, Ishihara R, Kato Y, Matsunaga T, Nishida T, Yamada T, Ogiyama H, Horie M, Kinoshita K, Tada T. Comparison of performances of artificial intelligence versus expert endoscopists for real-time assisted diagnosis of esophageal squamous cell carcinoma (with video). Gastrointest Endosc. 2020;92:848-855.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 45]  [Cited by in F6Publishing: 52]  [Article Influence: 13.0]  [Reference Citation Analysis (0)]
56.  Adler SN. The history of time for capsule endoscopy. Ann Transl Med. 2017;5:194.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 10]  [Cited by in F6Publishing: 11]  [Article Influence: 1.6]  [Reference Citation Analysis (0)]
57.  ASGE Technology Committee; Wang A, Banerjee S, Barth BA, Bhat YM, Chauhan S, Gottlieb KT, Konda V, Maple JT, Murad F, Pfau PR, Pleskow DK, Siddiqui UD, Tokar JL, Rodriguez SA. Wireless capsule endoscopy. Gastrointest Endosc. 2013;78:805-815.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 199]  [Cited by in F6Publishing: 166]  [Article Influence: 15.1]  [Reference Citation Analysis (0)]
58.  McAlindon ME, Ching HL, Yung D, Sidhu R, Koulaouzidis A. Capsule endoscopy of the small bowel. Ann Transl Med. 2016;4:369.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 29]  [Cited by in F6Publishing: 36]  [Article Influence: 4.5]  [Reference Citation Analysis (0)]
59.  Zheng Y, Hawkins L, Wolff J, Goloubeva O, Goldberg E. Detection of lesions during capsule endoscopy: physician performance is disappointing. Am J Gastroenterol. 2012;107:554-560.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 61]  [Cited by in F6Publishing: 65]  [Article Influence: 5.4]  [Reference Citation Analysis (1)]
60.  Mamonov AV, Figueiredo IN, Figueiredo PN, Tsai YH. Automated polyp detection in colon capsule endoscopy. IEEE Trans Med Imaging. 2014;33:1488-1502.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 126]  [Cited by in F6Publishing: 75]  [Article Influence: 7.5]  [Reference Citation Analysis (0)]
61.  Silva J, Histace A, Romain O, Dray X, Granado B. Toward embedded detection of polyps in WCE images for early diagnosis of colorectal cancer. Int J Comput Assist Radiol Surg. 2014;9:283-293.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 234]  [Cited by in F6Publishing: 182]  [Article Influence: 16.5]  [Reference Citation Analysis (0)]
62.  Iakovidis DK, Koulaouzidis A. Automatic lesion detection in capsule endoscopy based on color saliency: closer to an essential adjunct for reviewing software. Gastrointest Endosc. 2014;80:877-883.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 75]  [Cited by in F6Publishing: 66]  [Article Influence: 6.6]  [Reference Citation Analysis (0)]
63.  Liu G, Yan G, Kuang S, Wang Y. Detection of small bowel tumor based on multi-scale curvelet analysis and fractal technology in capsule endoscopy. Comput Biol Med. 2016;70:131-138.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 42]  [Cited by in F6Publishing: 29]  [Article Influence: 3.6]  [Reference Citation Analysis (0)]
64.  K G, C R. Heuristic Classifier for Observe Accuracy of Cancer Polyp Using Video Capsule Endoscopy. Asian Pac J Cancer Prev. 2017;18:1681-1688.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in F6Publishing: 2]  [Reference Citation Analysis (0)]
65.  Vieira PM, Freitas NR, Valente J, Vaz IF, Rolanda C, Lima CS. Automatic detection of small bowel tumors in wireless capsule endoscopy images using ensemble learning. Med Phys. 2020;47:52-63.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 10]  [Cited by in F6Publishing: 10]  [Article Influence: 2.0]  [Reference Citation Analysis (0)]
66.  Saito H, Aoki T, Aoyama K, Kato Y, Tsuboi A, Yamada A, Fujishiro M, Oka S, Ishihara S, Matsuda T, Nakahori M, Tanaka S, Koike K, Tada T. Automatic detection and classification of protruding lesions in wireless capsule endoscopy images based on a deep convolutional neural network. Gastrointest Endosc. 2020;92:144-151.e1.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 84]  [Cited by in F6Publishing: 91]  [Article Influence: 22.8]  [Reference Citation Analysis (1)]
67.  Das A, Nguyen CC, Li F, Li B. Digital image analysis of EUS images accurately differentiates pancreatic cancer from chronic pancreatitis and normal tissue. Gastrointest Endosc. 2008;67:861-867.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 69]  [Cited by in F6Publishing: 72]  [Article Influence: 4.5]  [Reference Citation Analysis (0)]
68.  Săftoiu A, Vilmann P, Dietrich CF, Iglesias-Garcia J, Hocke M, Seicean A, Ignee A, Hassan H, Streba CT, Ioncică AM, Gheonea DI, Ciurea T. Quantitative contrast-enhanced harmonic EUS in differential diagnosis of focal pancreatic masses (with videos). Gastrointest Endosc. 2015;82:59-69.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 94]  [Cited by in F6Publishing: 94]  [Article Influence: 10.4]  [Reference Citation Analysis (0)]
69.  Zhu M, Xu C, Yu J, Wu Y, Li C, Zhang M, Jin Z, Li Z. Differentiation of pancreatic cancer and chronic pancreatitis using computer-aided diagnosis of endoscopic ultrasound (EUS) images: a diagnostic test. PLoS One. 2013;8:e63820.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 54]  [Cited by in F6Publishing: 65]  [Article Influence: 5.9]  [Reference Citation Analysis (2)]
70.  Kuwahara T, Hara K, Mizuno N, Okuno N, Matsumoto S, Obata M, Kurita Y, Koda H, Toriyama K, Onishi S, Ishihara M, Tanaka T, Tajika M, Niwa Y. Usefulness of Deep Learning Analysis for the Diagnosis of Malignancy in Intraductal Papillary Mucinous Neoplasms of the Pancreas. Clin Transl Gastroenterol. 2019;10:1-8.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 61]  [Cited by in F6Publishing: 89]  [Article Influence: 22.3]  [Reference Citation Analysis (0)]
71.  Marya NB, Powers PD, Chari ST, Gleeson FC, Leggett CL, Abu Dayyeh BK, Chandrasekhara V, Iyer PG, Majumder S, Pearson RK, Petersen BT, Rajan E, Sawas T, Storm AC, Vege SS, Chen S, Long Z, Hough DM, Mara K, Levy MJ. Utilisation of artificial intelligence for the development of an EUS-convolutional neural network model trained to enhance the diagnosis of autoimmune pancreatitis. Gut. 2021;70:1335-1344.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 44]  [Cited by in F6Publishing: 59]  [Article Influence: 19.7]  [Reference Citation Analysis (1)]
72.  Minoda Y, Ihara E, Komori K, Ogino H, Otsuka Y, Chinen T, Tsuda Y, Ando K, Yamamoto H, Ogawa Y. Efficacy of endoscopic ultrasound with artificial intelligence for the diagnosis of gastrointestinal stromal tumors. J Gastroenterol. 2020;55:1119-1126.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 25]  [Cited by in F6Publishing: 24]  [Article Influence: 6.0]  [Reference Citation Analysis (0)]
73.  Marya NB, Powers PD, Fujii-Lau L, Abu Dayyeh BK, Gleeson FC, Chen S, Long Z, Hough DM, Chandrasekhara V, Iyer PG, Rajan E, Sanchez W, Sawas T, Storm AC, Wang KK, Levy MJ. Application of artificial intelligence using a novel EUS-based convolutional neural network model to identify and distinguish benign and malignant hepatic masses. Gastrointest Endosc. 2021;93:1121-1130.e1.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 14]  [Cited by in F6Publishing: 20]  [Article Influence: 6.7]  [Reference Citation Analysis (0)]
74.  Repici A, Wallace MB, East JE, Sharma P, Ramirez FC, Bruining DH, Young M, Gatof D, Irene Mimi Canto M, Marcon N, Cannizzaro R, Kiesslich R, Rutter M, Dekker E, Siersema PD, Spaander M, Kupcinskas L, Jonaitis L, Bisschops R, Radaelli F, Bhandari P, Wilson A, Early D, Gupta N, Vieth M, Lauwers GY, Rossini M, Hassan C. Efficacy of Per-oral Methylene Blue Formulation for Screening Colonoscopy. Gastroenterology. 2019;156:2198-2207.e1.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 44]  [Cited by in F6Publishing: 45]  [Article Influence: 9.0]  [Reference Citation Analysis (0)]
75.  Simonyan K, Zisserman A.   Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv preprint arXiv:1409.1556. 2014.  [PubMed]  [DOI]  [Cited in This Article: ]
76.  Huang G, Liu Z, van der Maaten L, Weinberger KQ.   Densely Connected Convolutional Networks. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017: 2261-2269.  [PubMed]  [DOI]  [Cited in This Article: ]
77.  He K, Zhang X, Ren S, Sun J.   Deep Residual Learning for Image Recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 770-778.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 72655]  [Cited by in F6Publishing: 17329]  [Article Influence: 2166.1]  [Reference Citation Analysis (0)]
78.  Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z.   Rethinking the Inception Architecture for Computer Vision. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 2818-2826.  [PubMed]  [DOI]  [Cited in This Article: ]
79.  Zhang X, Pan W, Xiao P.   In-Vivo Skin Capacitive Image Classification Using AlexNet Convolution Neural Network. In: 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC), 2018: 439–443.  [PubMed]  [DOI]  [Cited in This Article: ]
80.  Antioquia AMC, Stanley Tan D, Azcarraga A, Cheng WH, Hua KL.   ZipNet: ZFNet-level Accuracy with 48× Fewer Parameters. In: 2018 IEEE Visual Communications and Image Processing (VCIP), 2018: 1-4.  [PubMed]  [DOI]  [Cited in This Article: ]
81.  Algorry AM, García AG, Wofmann AG.   Real-Time Object Detection and Classification of Small and Similar Figures in Image Processing. In: 2017 International Conference on Computational Science and Computational Intelligence (CSCI), 2017: 516-519.  [PubMed]  [DOI]  [Cited in This Article: ]