Minireviews Open Access
Copyright ©The Author(s) 2021. Published by Baishideng Publishing Group Inc. All rights reserved.
Artif Intell Gastrointest Endosc. Oct 28, 2021; 2(5): 198-210
Published online Oct 28, 2021. doi: 10.37126/aige.v2.i5.198
Artificial intelligence and early esophageal cancer
Ning Li, Shi-Zhu Jin
Ning Li, Shi-Zhu Jin, Department of Gastroenterology and Hepatology, The Second Affiliated Hospital of Harbin Medical University, Harbin 150086, Heilongjiang Province, China
ORCID number: Ning Li (0000-0002-0913-0624); Shi-Zhu Jin (0000-0003-3613-0926).
Author contributions: Li N wrote the paper and prepared the figures and tables; Jin SZ revised the paper.
Supported by Heilongjiang Province Education Science "13th Five-Year Plan" 2020 Key Project, No. GJB1320190.
Conflict-of-interest statement: The authors declare no conflicts of interest related to this article.
Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See:
Corresponding author: Shi-Zhu Jin, MD, Chief Physician, Professor, Department of Gastroenterology and Hepatology, The Second Affiliated Hospital of Harbin Medical University, No. 246 Xuefu Road, Nangang District, Harbin 150086, Heilongjiang Province, China.
Received: July 28, 2021
Peer-review started: July 28, 2021
First decision: September 12, 2021
Revised: September 23, 2021
Accepted: October 27, 2021
Article in press: October 27, 2021
Published online: October 28, 2021


The development of esophageal cancer (EC) from early to advanced stage results in a high mortality rate and poor prognosis. Advanced EC not only poses a serious threat to the life and health of patients but also places a heavy economic burden on their families and society. Endoscopy is of great value for the diagnosis of EC, especially in the screening of Barrett’s esophagus and early EC. However, at present, endoscopy has a low diagnostic rate for early tumors. In recent years, artificial intelligence (AI) has made remarkable progress in the diagnosis of digestive system tumors, providing a new model for clinicians to diagnose and treat these tumors. In this review, we aim to provide a comprehensive overview of how AI can help doctors diagnose early EC and precancerous lesions and make clinical decisions based on the predicted results. We analyze and summarize the recent research on AI and early EC. We find that based on deep learning (DL) and convolutional neural network methods, the current computer-aided diagnosis system has gradually developed from in vitro image analysis to real-time detection and diagnosis. Based on powerful computing and DL capabilities, the diagnostic accuracy of AI is close to or better than that of endoscopy specialists. We also analyze the shortcomings in the current AI research and corresponding improvement strategies. We believe that the application of AI-assisted endoscopy in the diagnosis of early EC and precancerous lesions will become possible after the further advancement of AI-related research.

Key Words: Artificial intelligence, Computer-aided diagnosis, Deep learning, Convolutional neural network, Barrett’s esophagus, Early esophageal cancer

Core Tip: The early diagnosis and early treatment of esophageal cancer (EC) have always been a hot spot in clinical medicine research and are of great importance to the prognosis of patients. With continuous improvements in computer technology and the arrival of the era of big data, the artificial intelligence (AI)-assisted endoscopic diagnosis of EC has also flourished. This review mainly introduces the research progress of AI-assisted endoscopy in the diagnosis of Barrett’s esophagus and early EC.


Barrett’s esophagus (BE) is a premalignant condition characterized by the replacement of the normal esophageal squamous epithelium with columnar epithelium. Esophageal cancer (EC) is the seventh most common cancer and the sixth leading cause of cancer-related mortality worldwide[1]. EC mainly consists of two histological types: Esophageal squamous cell carcinoma (ESCC) and esophageal adenocarcinoma (EAC). ESCC is the main pathological type in Asian countries, and the 5-year survival rate is less than 20%[2]. EAC is more common in Western countries, and its incidence has been rising globally in recent years[3]. The progression of EC from early to advanced stage is accompanied by a high mortality rate and poor prognosis, so early detection and diagnosis greatly affect outcomes. The need for more efficient detection methods for early EC has driven in-depth research in the field of artificial intelligence (AI). The purpose of this review is to summarize the diagnostic value of AI for BE and early EC, which is conducive to the early treatment of patients and a reduction in mortality. In this review, we will discuss the following: (1) The utility of AI techniques in the endoscopic detection of BE; (2) The utility of AI techniques in the endoscopic detection of early EC; and (3) Problems and prospects of AI-assisted endoscopic diagnosis.


AI refers to the ability of computers to imitate the cognitive functions of the human mind and to learn autonomously. In recent years, AI has made great progress in various fields of medicine, such as radiation oncology, diabetic retinopathy, and skin cancer[4-6]. Machine learning (ML), the branch of AI most relevant here, can be roughly divided into traditional learning and deep learning (DL). Traditional learning methods require handcrafted feature design, which is time-consuming and laborious. DL methods can independently extract and learn image features, deriving progressively more complex and abstract high-level features layer by layer through a multilayer architecture, which has allowed DL to mature and be applied in clinical practice[7]. Convolutional neural networks (CNNs) are the kind of DL method most commonly used in AI-assisted image recognition. These networks contain multilayer perceptrons and imitate the neural circuits of the human brain to generalize, abstract, and synthesize information. DL is an end-to-end learning method that does not require the design of specific image features[8,9]. With the rapid development of information technology, DL has received increasing attention in the medical field, and the computer-aided diagnosis (CAD) of gastrointestinal (GI) diseases has become a hot research topic. CAD is a technology that preprocesses endoscopic images, extracts image features, processes the data, and outputs diagnostic results with the help of computer algorithms and graphics processing technology[10] (Figure 1).

Figure 1
Figure 1 Diagram representation of artificial intelligence domains. ML: Machine learning; DL: Deep learning; CAD: Computer-aided diagnosis; CNN: Convolutional neural network.
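As a minimal illustration of the convolution operation that underlies CNN feature extraction (not part of any system reviewed here), the following pure-Python sketch slides a small kernel over a tiny image to produce a feature map; the image and kernel values are invented for demonstration.

```python
def conv2d(image, kernel):
    """Slide a kernel over a 2-D image (valid mode, stride 1) and
    return the feature map -- the core operation of a CNN layer."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    feature_map = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        feature_map.append(row)
    return feature_map

# A vertical-edge kernel applied to a toy image whose right half is bright:
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(image, kernel))  # -> [[0, 2, 0], [0, 2, 0]]
```

The strong responses in the middle column mark the brightness edge; a trained CNN learns many such kernels automatically rather than having them designed by hand.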

BE results from chronic inflammation of the esophagus and is a risk factor for the development of EAC[11]. GI societies recommend regular endoscopy for BE patients to detect dysplasia or carcinoma early[12,13]. Endoscopic surveillance currently follows the Seattle protocol: Patients with BE undergo a systematic four-quadrant biopsy in which the entire BE segment is sampled at intervals of 1-2 cm using a "turn and suction" technique[14]. However, this method is invasive, costly, time-consuming, and difficult for patients to adhere to[15]. Because of poor patient compliance with the Seattle protocol, the American Society for Gastrointestinal Endoscopy established performance thresholds for optical diagnosis: Random biopsies can be replaced if targeted biopsies assisted by an imaging technique achieve a per-patient sensitivity of at least 90%, a negative predictive value (NPV) of at least 98%, and a specificity of at least 80%[16]. At present, however, these requirements can only be met by experts.
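These performance thresholds amount to a simple check on a per-patient confusion matrix. The following sketch (with hypothetical counts) shows how a targeted-biopsy technique would be assessed against them; the function name and example numbers are ours, not from any cited study.

```python
def meets_asge_thresholds(tp, fn, tn, fp):
    """Check a per-patient confusion matrix against the ASGE thresholds
    for replacing random biopsies with targeted biopsies:
    sensitivity >= 90%, NPV >= 98%, specificity >= 80%."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)
    return sensitivity >= 0.90 and npv >= 0.98 and specificity >= 0.80

# Hypothetical: 95 of 100 dysplastic patients detected, 85 of 100
# non-dysplastic patients correctly cleared. Sensitivity and specificity
# pass, but NPV is 85/90 = 94.4%, below the 98% bar.
print(meets_asge_thresholds(tp=95, fn=5, tn=85, fp=15))  # -> False
```

The example illustrates why the NPV criterion is the hardest to satisfy: even a small number of missed dysplastic patients drags the NPV below 98%.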

In addition, early neoplastic lesions and dysplasia are subtle, focally distributed, and difficult to detect endoscopically[17]. Progression of BE to early neoplasia is rare, and endoscopic surveillance is generally carried out in community hospitals; general endoscopists may therefore be unfamiliar with these lesions, and this lack of familiarity is an important cause of missed diagnoses[18,19]. In recent years, many new endoscopic techniques have been developed to improve the diagnosis of BE, such as magnification endoscopy (ME), chromoendoscopy, confocal laser endomicroscopy, and volumetric laser endomicroscopy, but most are expensive and require a long learning period for endoscopists[20,21]. Differences in endoscopists' interpretation of images can also lead to differences in diagnosis[22]. Therefore, there is an urgent need for a practical tool to improve endoscopists' accuracy in the clinic. Moreover, an endoscopist's diagnosis may be influenced by the time of the endoscopy, psychological state, time pressure, and cumbersome procedures. AI, by contrast, has a short learning time and, unlike endoscopists, does not fatigue; it therefore has good application prospects (Table 1).

Table 1 Application of artificial intelligence in endoscopic detection of Barrett’s esophagus.
Ref. | Target disease | Endoscopic modality | AI technology | Dataset | Performance
van der Sommen et al[23], 2016 | Early neoplasia in BE | WLI | SVM | 100 images | Per-image: sensitivity 83%/specificity 83%; per-patient: sensitivity 86%/specificity 87%
Struyvenberg et al[24], 2021 | BE | WLI/NBI | CNN | Train 494364 images/1247 images; test 183 images/157 videos | Images: accuracy 84%/sensitivity 88%/specificity 78%; videos: accuracy 83%/sensitivity 85%/specificity 83%
de Groof et al[25], 2020 | Early neoplasia in BE | WLI | ResNet-UNet | Train 1544 images; test 160 images | Dataset 4: accuracy 89%/sensitivity 90%/specificity 88%; dataset 5: accuracy 88%/sensitivity 93%/specificity 83%
de Groof et al[26], 2020 | Barrett’s neoplasia | WLI | ResNet-UNet | Train 1544 images; test 20 patients | Accuracy 90%/sensitivity 91%/specificity 89%
Hong et al[27], 2017 | BE | Endomicroscopy | CNN | Train 236 images; test 26 images | Accuracy 80.77%
Hashimoto et al[28], 2020 | Early neoplasia in BE | WLI/NBI | CNN | Train 1832 images; test 458 images | Accuracy 95.4%/sensitivity 96.4%/specificity 94.2%
de Groof et al[29], 2019 | Barrett’s neoplasia | WLI | SVM | 60 images | Accuracy 92%/sensitivity 95%/specificity 85%
Computer-aided diagnosis using white light imaging/narrow band imaging

van der Sommen et al[23] collected 100 images from 44 BE patients and developed an ML algorithm based on a support vector machine (SVM) that employed specific texture and color filters to detect early neoplasia in BE. The sensitivity and specificity of the system were both 83% in the per-image analysis and 86% and 87%, respectively, in the per-patient analysis.

Struyvenberg et al[24] developed a CAD system based on a CNN model that was first trained with 494364 images and then further trained with 690 BE neoplasia and 557 nondysplastic BE (NDBE) white light imaging (WLI) images. Next, 112 BE neoplasia and 71 NDBE narrow band imaging (NBI) zoom images were used for training and validation, and finally, 59 BE neoplasia and 98 NDBE NBI zoom videos were used for training and validation. Fourfold cross-validation was used to evaluate the detection performance of the CAD system. The accuracy, sensitivity, and specificity of the NBI zoom image-based CAD system were 84%, 88%, and 78%, respectively; for the NBI zoom videos, they were 83%, 85%, and 83%, respectively.
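For readers unfamiliar with the k-fold cross-validation used in such evaluations, the splitting scheme can be sketched as follows; the toy dataset and round-robin fold assignment are illustrative only, not the authors' actual implementation.

```python
def k_fold_splits(items, k):
    """Partition a dataset into k folds; each fold serves once as the
    validation set while the remaining folds form the training set
    (the scheme behind fourfold cross-validation)."""
    folds = [items[i::k] for i in range(k)]  # round-robin assignment
    for i in range(k):
        val = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, val

# Eight toy samples split into 4 folds: every sample is validated
# exactly once, and each model sees 6 training samples.
for train, val in k_fold_splits(list(range(8)), 4):
    print(val, train)
```

Averaging the performance over the k validation folds gives a less optimistic estimate than evaluating on the training data itself, which is why it is the standard way to assess a CAD system on limited data.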

de Groof et al[25,26] developed a CAD system based on a ResNet/U-Net model to help endoscopists detect early BE neoplasia. The system was trained with 1544 endoscopic images of BE neoplasia and NDBE and then validated on 160 images. In an in vitro study, the accuracy, sensitivity, and specificity of the CAD system for detecting early BE neoplasia were 89%, 90%, and 88%, respectively, in dataset 4 and 88%, 93%, and 83%, respectively, in dataset 5. The CAD system outperformed 53 nonexpert endoscopists in terms of accuracy and sensitivity. In an in vivo evaluation, endoscopic examinations were performed on ten patients with NDBE and ten patients with BE neoplasia; the images obtained by WLI were analyzed immediately by the CAD system, which provided feedback to the endoscopist. The accuracy, sensitivity, and specificity of the CAD system were 90%, 91%, and 89%, respectively. Given its high accuracy for neoplasia detection and low false positive rate, the CAD system warrants testing in larger, multicenter trials.

Hong et al[27] constructed a CNN-based CAD system to distinguish intestinal metaplasia (IM), gastric metaplasia (GM), and BE neoplasia. The researchers obtained 236 endoscopic images of BE from the 2016 International Symposium on Biomedical Imaging, using 155 IM, 26 GM, and 55 BE neoplasia samples as a training set. Because the number of training images was insufficient, the researchers applied image distortion for data augmentation to increase the sample size. Then, 26 images, including 17 IM, 4 GM, and 5 BE neoplasia images, were used as the validation set. The accuracy of the CAD system for the classification of IM, GM, and BE neoplasia was 80.77%. Although the number of images was small, this study suggested that a CNN-structured CAD system can be applied to the classification of esophageal lesions.

Real-time recognition by computer-aided diagnosis

Hashimoto et al[28] collected 916 images from 70 patients with early neoplastic BE and 916 control images from 30 patients with normal BE and then trained a CNN algorithm that had been pretrained on ImageNet. The researchers analyzed 458 images with the CNN algorithm; the accuracy, sensitivity, and specificity of the system for detecting early neoplastic BE were 95.4%, 96.4%, and 94.2%, respectively.

de Groof et al[29] designed an SVM-based ML algorithm using WLI images from 40 patients with BE neoplasia and 20 patients with NDBE. All of the images were delineated by endoscopic experts, with areas overlapped by at least four delineations marked as "sweet spots" and areas covered by at least one delineation marked as "soft spots". The CAD system was trained on color and texture features, and its performance was evaluated using leave-one-out cross-validation. The accuracy, sensitivity, and specificity of the CAD system were 92%, 95%, and 85%, respectively, and its localizations fell within the soft spots and sweet spots in 100% and 90% of images, respectively. This CAD system can therefore detect and locate early BE neoplasia with high accuracy on WLI images, which lays a foundation for the future real-time automatic recognition of BE neoplasia.
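The "sweet spot"/"soft spot" consensus described above amounts to counting, for each pixel, how many expert delineations cover it. The following sketch illustrates the idea on an invented one-dimensional strip of pixels standing in for an image; the masks and function name are ours.

```python
def consensus_spots(delineations, size, sweet=4, soft=1):
    """Given binary masks drawn by several experts (lists of 0/1 flags,
    one per pixel), mark 'sweet spots' (pixels inside >= 4 delineations)
    and 'soft spots' (pixels inside >= 1 delineation)."""
    counts = [sum(mask[p] for mask in delineations) for p in range(size)]
    sweet_mask = [int(c >= sweet) for c in counts]
    soft_mask = [int(c >= soft) for c in counts]
    return sweet_mask, soft_mask

# Five hypothetical experts delineate a 6-pixel strip; they agree on
# the middle pixels but disagree at the edges of the lesion.
masks = [
    [0, 1, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 1, 1, 0, 0],
    [1, 1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0, 0],
]
sweet, soft = consensus_spots(masks, 6)
print(sweet)  # -> [0, 1, 1, 1, 0, 0]  (high-confidence core)
print(soft)   # -> [1, 1, 1, 1, 1, 0]  (any expert's delineation)
```

Training on the high-agreement sweet spots while scoring localization against the permissive soft spots keeps the ground truth strict without penalizing the system for reasonable boundary disagreement.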


EC is usually diagnosed at an advanced stage, when the main treatment is esophagectomy. Surgery is highly invasive, with relatively high mortality and recurrence rates and poor patient prognoses. However, if EC is detected at an early stage, the prognosis can be improved by endoscopic resection[30,31]. Therefore, the early diagnosis of EC is essential for favorable treatment. Some studies have applied dyes to the esophageal mucosa to reveal the surface vasculature and neoplasia more clearly; the most commonly used agents are acetic acid, iodine, indigo carmine, and methylene blue, but their cost and complexity limit their application[32,33]. NBI provides a better view of intrapapillary capillary loops (IPCLs) and is used to detect superficial ESCC; however, inexperienced endoscopists are still prone to missed diagnoses[34-36]. Therefore, AI, which can outperform humans in image recognition, is expected to be applied in the diagnosis of EC (Table 2).

Table 2 Application of artificial intelligence in endoscopic detection of early esophageal cancer.
Ref. | Target disease | Endoscopic modality | AI technology | Dataset | Performance
Ebigbo et al[37], 2019 | EAC | WLI/NBI | CNN | 248 images | Augsburg database: sensitivity 97%/specificity 88% (WLI), sensitivity 94%/specificity 80% (NBI); MICCAI database: sensitivity 92%/specificity 100%
Ebigbo et al[38], 2020 | EAC | WLI | CNN | Train 129 images; test 62 images | Accuracy 89.9%/sensitivity 83.7%/specificity 100%
Horie et al[39], 2019 | EC | WLI/NBI | CNN | Train 8428 images; test 1118 images | Accuracy 98%/sensitivity 98%
Cai et al[40], 2019 | ESCC | WLI | DNN | Train 2428 images; test 187 images | Accuracy 91.4%/sensitivity 97.8%/specificity 85.4%
Ohmori et al[41], 2020 | ESCC | WLI/NBI/BLI | CNN | Train 22562 images; test 727 images | Non-ME: accuracy 81.0%/sensitivity 90%/specificity 76% (WLI), accuracy 77%/sensitivity 100%/specificity 63% (NBI/BLI); ME: accuracy 77%/sensitivity 98%/specificity 56%
Liu et al[42], 2020 | EC | WLI | CNN | Train 1017 images; test 255 images | Accuracy 85.83%/sensitivity 94.23%/specificity 94.67%
Kumagai et al[43], 2019 | ESCC | ECS | CNN | Train 4715 images; test 1520 images | Accuracy 90.9%/sensitivity 92.6%/specificity 89.3%
Guo et al[44], 2020 | ESCC | NBI | CNN | Train 6473 images; test 6671 images and 80 videos | Images: sensitivity 98.04%/specificity 95.03%; videos: non-ME sensitivity 60.8% (per frame)/100% (per lesion), ME sensitivity 96.1% (per frame)/100% (per lesion)
Tokai et al[46], 2020 | ESCC | WLI/NBI | CNN | Train 1751 images; test 291 images | Accuracy 80.9%/sensitivity 84.1%/specificity 73.3%
Nakagawa et al[47], 2019 | ESCC | WLI/NBI | CNN | Train 14338 images; test 914 images | Accuracy 91%/sensitivity 90.1%/specificity 95.8%
Zhao et al[48], 2019 | ESCC | NBI | Double-labeling FCN | 1350 images | Lesion level: accuracy 89.2%; pixel level: accuracy 93%
Everson et al[49] | ESCC | NBI | CNN | 7046 images | Accuracy 93.7%/sensitivity 89.3%/specificity 98%
Uema et al[50], 2021 | ESCC | NBI | CNN | Train 1777 images; test 747 images | Accuracy 84.2%
Fukuda et al[51], 2020 | ESCC | NBI/BLI | CNN | Train 28333 images; test 144 patients | Detection: accuracy 63%/sensitivity 91%/specificity 51%; characterization: accuracy 88%/sensitivity 86%/specificity 89%
Shimamoto et al[52], 2020 | ESCC | WLI/NBI/BLI | CNN | Train 23977 images; test 102 videos | Non-ME: accuracy 87%/sensitivity 50%/specificity 99%; ME: accuracy 89%/sensitivity 71%/specificity 95%
Waki et al[53], 2021 | ESCC | WLI/NBI/BLI | CNN | Train 18797 images; test 100 videos | Sensitivity 85.7%/specificity 40%
Detection of lesions

Ebigbo et al[37] created a CAD system based on a CNN. In the Augsburg database, the sensitivity and specificity of the CAD system for the diagnosis of EAC were 97% and 88%, respectively, in WLI images and 94% and 80%, respectively, in NBI images. In the MICCAI database, the sensitivity and specificity for WLI images were 92% and 100%, respectively. Subsequently, Ebigbo et al[38] developed an encoder-decoder artificial neural network with a 101-layer ResNet backbone and trained this CAD system using 129 endoscopic images from the Augsburg database. The researchers evaluated 62 images with the CAD system, including 36 images of early EAC and 26 images of BE. Although the number of patients evaluated was small, real-time detection of EAC demonstrated good results: The sensitivity and specificity of the system were 83.7% and 100%, respectively, and the overall accuracy was 89.9%.

Horie et al[39] developed a CNN system using DL to detect EC based on 8428 images from 384 patients with EC. The researchers used the CNN system to analyze 1118 images (47 patients with EC and 50 patients without EC). The system analyzed all of the images in 27 s, achieved a sensitivity of 98%, and could detect EC lesions less than 10 mm in size. The NPV was 95%, but the positive predictive value was only 40%, possibly because of the limited training data and the small number of images from patients with esophageal inflammation. In addition, the system distinguished between superficial and advanced EC with 98% accuracy. These results indicate that the CNN system can accurately analyze a large number of endoscopic images in a short period of time, which is conducive to the early diagnosis of EC.
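The gap between the system's high sensitivity and its low positive predictive value follows directly from the definitions of PPV and NPV. The confusion-matrix counts below are hypothetical, chosen only to reproduce the reported 98% sensitivity, 40% PPV, and 95% NPV; they are not the study's actual counts.

```python
def predictive_values(tp, fp, tn, fn):
    """Positive and negative predictive values from a confusion matrix:
    PPV = TP/(TP+FP), NPV = TN/(TN+FN)."""
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical counts: with 98 of 100 true lesions detected (98%
# sensitivity) but 147 false alarms, PPV drops to 40% even though
# NPV stays at 95% -- abundant false positives, not missed cancers,
# drive the low PPV.
ppv, npv = predictive_values(tp=98, fp=147, tn=38, fn=2)
print(round(ppv, 2), round(npv, 2))  # -> 0.4 0.95
```

This is why accumulating more benign and inflammatory training images, as the authors suggest, targets PPV specifically: it reduces false positives without touching sensitivity.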

Cai et al[40] developed a CAD system using a deep neural network based on 2428 endoscopic images (746 patients) with the aim of identifying early ESCC from WLI images. Among these images, there were 1332 ESCC images and 1096 normal tissue images. The researchers evaluated the CAD system using 187 images (52 patients), and 16 endoscopic physicians reviewed the images. The results showed that the accuracy, sensitivity, and specificity of the CAD system for the early diagnosis of ESCC were 91.4%, 97.8% and 85.4%, respectively. With the help of the CAD system, the diagnostic accuracy and sensitivity of endoscopists with different seniority levels were improved, especially for those with less seniority. This result indicates that AI-assisted digestive endoscopy can reduce the rate of missed diagnosis and improve the diagnostic level of endoscopists with different experiences in early EC.

Ohmori et al[41] developed a CAD system based on CNN to evaluate the diagnosis of ESCC under ME and non-ME. The researchers used 7844 ME and 9591 non-ME images from ESCC and 3435 ME and 1692 non-ME images from noncancerous or normal esophagi as a training set. Then, 255 non-ME WLI images, 268 non-ME-NBI/blue laser imaging (BLI) images, and 204 ME-NBI/BLI images of ESCC were used as a validation set. The accuracy, sensitivity, and specificity of the CAD system were 81%, 90%, and 76%, respectively, in non-ME WLI images. In the non-ME diagnosis of NBI/BLI images, the accuracy, sensitivity, and specificity of the CAD system were 77%, 100%, and 63%, respectively. In the diagnosis of ME, the CAD system had an accuracy of 77%, sensitivity of 98%, and specificity of 56%. In conclusion, the diagnosis of ESCC with the CAD system was not significantly different from that of experienced endoscopists.

Liu et al[42] developed a CNN model using a DL approach to distinguish among normal esophagi, precancerous lesions, and EC. The model consists of two subnetworks: The O-stream, which takes the original images as input to extract color changes and overall features, and the P-stream, which takes the preprocessed images as input to extract texture changes and detail features. In total, 1017 images (normal esophagi, precancerous lesions, and EC) were used as the training set, and 255 images were used as the validation set. The accuracy, sensitivity, and specificity of the CNN model were 85.83%, 94.23%, and 94.67%, respectively, which shows good prospects for the diagnosis of esophageal lesions.

Kumagai et al[43] constructed an AI model based on a CNN with GoogLeNet to classify endocytoscopic system (ECS) images at different degrees of magnification as malignant or nonmalignant. The AI system was trained using 4715 esophageal ECS images (1141 malignant and 3574 nonmalignant) and validated using 1520 images (27 ESCCs and 28 nonmalignant lesions). The sensitivity of the AI system was 92.6% for the diagnosis of ESCC, the specificity was 89.3% for the diagnosis of nonmalignant lesions, and the overall accuracy was 90.9%. Early EC usually presents endoscopically as slight swelling, depression, or a color change in the mucosa, which is difficult to diagnose, especially for less experienced endoscopists. These results indicate that AI has good auxiliary value for the endoscopic diagnosis of early EC and its precancerous lesions and can play an important role in guiding the learning of new standards and technologies.

Scope of lesions

Guo et al[44] developed a CNN-based CAD system for the real-time detection of precancerous lesions and ESCC. A total of 6473 NBI images were used to train the CAD system, and endoscopic still images and videos were used to validate it. For each input endoscopic image, the system generates a probabilistic heat map in which yellow indicates highly suspected cancerous lesions and blue indicates noncancerous tissue; when cancer is detected, the identified tumor area is covered with color. The CAD system was used to diagnose 1480 malignant and 5191 nonmalignant NBI images, with a sensitivity of 98.04% and a specificity of 95.03%. In 27 non-ME and 20 ME videos of precancerous lesions and early ESCC, the per-frame sensitivities were 60.8% and 96.1%, respectively, and the per-lesion sensitivities were both 100%. In 33 videos of normal esophagi, the specificities were 99.9% per frame and 90.9% per case. The AI model can mark the location and extent of lesions in the input images, and the marked extent was roughly the same as that marked by endoscopists. These findings indicate the feasibility and great potential of AI in delineating the extent of precancerous and early EC lesions.
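The distinction between per-frame and per-lesion sensitivity in video analysis can be made concrete with a small sketch; the frame-level detection flags below are invented for illustration.

```python
def video_sensitivities(videos):
    """Per-frame and per-lesion sensitivity for lesion-positive videos.
    Each video is a list of per-frame flags: 1 if the CAD system flagged
    the lesion in that frame, 0 if it missed it there."""
    frames = [f for video in videos for f in video]
    per_frame = sum(frames) / len(frames)          # fraction of frames flagged
    per_lesion = sum(1 for v in videos if any(v)) / len(videos)  # any frame counts
    return per_frame, per_lesion

# Three hypothetical lesion videos: the system misses the lesion in many
# individual frames, yet detecting it in at least one frame of every
# video is enough for 100% per-lesion sensitivity.
videos = [[0, 1, 1, 0], [0, 0, 1, 0], [1, 1, 1, 1]]
per_frame, per_lesion = video_sensitivities(videos)
print(round(per_frame, 2), per_lesion)  # -> 0.58 1.0
```

This explains how a system can report a modest 60.8% per-frame sensitivity alongside a 100% per-lesion sensitivity: for clinical detection, one confident frame per lesion suffices.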

Depth of lesions

The depth of EC invasion is a key factor affecting treatment decisions. In principle, endoscopic resection can be performed for esophageal lesions confined to the epithelium, lamina propria, or muscularis mucosae and for lesions with a submucosal infiltration depth of less than 200 μm; surgical resection and chemoradiotherapy are required for lesions with submucosal invasion deeper than 200 μm. Therefore, accurate determination of the depth of infiltration can avoid the impact of overtreatment on patient quality of life[45].
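The depth-based rule above can be summarized as a simple decision function. This is an illustrative sketch of the criterion only, not a clinical decision tool; the function name and parameters are ours, and a real decision also weighs histology, lymphovascular invasion, and patient factors.

```python
def treatment_suggestion(invades_submucosa, submucosal_depth_um=0):
    """Sketch of the depth criterion described above: endoscopic
    resection for lesions confined to the mucosa or with submucosal
    invasion < 200 um; otherwise surgery or chemoradiotherapy.
    Illustrative only -- not a clinical decision tool."""
    if not invades_submucosa or submucosal_depth_um < 200:
        return "endoscopic resection"
    return "surgical resection or chemoradiotherapy"

print(treatment_suggestion(invades_submucosa=False))
# -> endoscopic resection
print(treatment_suggestion(invades_submucosa=True, submucosal_depth_um=350))
# -> surgical resection or chemoradiotherapy
```

The 200-μm cutoff is what makes AI depth estimation clinically consequential: an error of a fraction of a millimeter can flip the suggested treatment from organ-sparing resection to surgery.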

Tokai et al[46] collected 1751 ESCC images to design an AI diagnostic system using CNN techniques; the system used DL technology to evaluate the infiltration depth of ESCC. The researchers used the AI system to evaluate 55 patients (291 images) and compared its performance with that of 13 endoscopists. The detection rate of the AI system for ESCC was 95.5%, taking 10 s. In the images with ESCC detected, the accuracy, sensitivity, and specificity of the infiltration depth assessment were 80.9%, 84.1%, and 73.3%, respectively, taking 6 s. Moreover, the AI system was more accurate than 12 of the 13 endoscopists. These results indicate that the AI system has great potential for assessing the infiltration depth of ESCC.

Nakagawa et al[47] developed a CNN-based AI system to assess the infiltration depth of ESCC. The researchers trained the AI system with images from 804 EC patients (8660 non-ME images and 5678 ME images) and then validated the system with images from 155 patients (405 non-ME images and 509 ME images). The accuracy, sensitivity, and specificity of the system were 91%, 90.1%, and 95.8%, respectively. When 16 endoscopists evaluated the same images, the accuracy, sensitivity, and specificity were 89.6%, 89.8%, and 88.3%, respectively. These results suggest that the AI system performs well in assessing the depth of ESCC infiltration, even better than endoscopists.

IPCLs are the hallmark of ESCC, and their morphologic changes correlate with the depth of tumor invasion. Zhao et al[48] used the ME-NBI technique to evaluate patients' esophageal conditions and established a CAD system for the automatic classification of IPCLs based on endoscopic diagnosis and histological analysis. This system uses a double-labeling fully convolutional network to evaluate 1350 images with 1383 lesions and compare them with the evaluations of endoscopists. The results showed that the diagnostic accuracy of the system was 89.2% at the lesion level and 93% at the pixel level, which were higher than those of endoscopists.

Everson et al[49] developed an AI system to detect the presence and stage of early ESCC lesions. A total of 7046 ME-NBI images from 17 patients (ten with early ESCC and seven with a normal esophagus) were used to train the CNN, and all imaged areas were confirmed histologically. The accuracy of this CNN system for distinguishing normal from abnormal IPCL patterns was 93.7%, with a sensitivity of 89.3% and a specificity of 98% for abnormal patterns. The CNN system can therefore distinguish normal and abnormal IPCL patterns relatively accurately and may provide guidance for clinical treatment decisions in ESCC.

Uema et al[50] constructed a CNN (ResNeXt-101) model to classify ESCC microvessels. The study used 1777 ESCC images under ME-NBI as a training set and 747 ESCC images under ME-NBI as a validation set (validated by the CAD system and 8 endoscopists). The results showed that the accuracy of the CAD system for microvascular classification was 84.2%, which was higher than the average accuracy achieved by endoscopists. Therefore, this CAD system has good application potential for ESCC microvascular classification.

Dynamic images

Fukuda et al[51] developed a CNN-based CAD system to diagnose ESCC. The researchers used 23746 ESCC images (1544 patients) and 4587 noncancerous images (458 patients) as a training set. Video image clips from 144 patients were used as a validation set, and then 13 endoscopic specialists used the same videos for diagnoses. The accuracy, sensitivity, and specificity of the CAD system in identifying suspicious lesions were 63%, 91%, and 51%, respectively. The accuracy, sensitivity, and specificity in differentiating cancerous from noncancerous lesions were 88%, 86% and 89%, respectively. In previous studies, the diagnosis of ESCC by CAD systems was mainly based on static images, with few video images. Because video images are affected by many factors, such as distance, angle, breathing movement, and esophageal motility, using a CAD system to analyze video images is more challenging. Fukuda et al[51] demonstrated that compared with endoscopic experts, CAD systems are more sensitive to ESCC detection and have a significantly higher accuracy and specificity in differentiating cancer from noncancer, which will provide valuable clinical support for endoscopists in their diagnoses.

Using 23977 ESCC images (6857 WLI images and 17120 NBI/BLI images) as a training set, Shimamoto et al[52] developed a CNN-based AI system to assess the infiltration depth of ESCC. The AI system was then validated on 102 video clips, and endoscopic specialists were invited to view the same videos for diagnosis. The accuracy, sensitivity, and specificity of the AI system were 89%, 71%, and 95%, respectively, for ME diagnosis and 87%, 50%, and 99%, respectively, for non-ME diagnosis; most of these parameters were higher than those of the endoscopic experts. This suggests that the AI system can provide useful support during endoscopy.

Waki et al[53] constructed a CNN-based AI system with 17336 images of ESCC (1376 patients) and 1461 images of noncancerous/normal esophagi (196 patients). While recording the validation videos, the endoscopist passed through the esophagus at a constant speed to simulate a situation in which a lesion might be missed. A total of 100 videos (50 ESCCs, 22 noncancerous esophagi, and 28 normal esophagi) were then evaluated by the AI system and 21 endoscopists. The sensitivity and specificity of the AI system for ESCC diagnosis were 85.7% and 40%, respectively, and those of the endoscopists were 75% and 91.4%, respectively. With the help of the AI system, the endoscopists' diagnostic specificity was almost unchanged, but their sensitivity improved. The AI system can therefore play an important role as an auxiliary tool in the diagnosis of ESCC.


With continuous improvements in endoscopic technology and the diagnostic levels of AI, the combination of AI and endoscopy has become popular. Although AI has made some achievements in the diagnosis of esophageal precancerous lesions and early EC, there are still some problems.

False positive and false negative results

First, almost all AI diagnostic systems yield some false negative results. Small lesions are easily missed in clinical practice, so improving the detection of these easily neglected lesions is crucial. AI diagnostic systems also yield false positive results, which can lead to overtreatment. Shadowed portions, color changes in the gastric antrum and pylorus, changes in normal tissue structure, and benign lesions (scarring, local atrophy, inflammation, and ectopic gastric mucosa) are all sources of false positive results[54]. To address this problem, on the one hand, a large number of high-quality endoscopic images should be accumulated for algorithm training and validation to produce more accurate results. On the other hand, endoscopic videos contain many low-resolution, real-world frames that are difficult to capture as still pictures; using large numbers of images extracted from videos as learning material can reduce the false positive and false negative rates to a certain extent.

Retrospective experimental studies have a single source of learning materials, and prospective experimental studies are lacking

At present, most training and validation data sets for AI systems have been derived from the same batch of data from a single center. Although the accuracy of these systems has been verified internally, external validation is still lacking[55]. Moreover, the resolution of examination images varies greatly among endoscope models and devices. Therefore, future studies should include endoscopic image data from multiple institutions, endoscope models, and devices to ensure the reproducibility of the results.
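The difference between internal and external validation can be reduced to how the data are partitioned: an internal split mixes images from the same center across training and test sets, whereas external validation holds out entire institutions. A minimal sketch, using hypothetical (institution, image) record tuples rather than any real data set:

```python
def split_by_institution(records, held_out):
    """Hold out every image from the given institutions for external validation.

    records: iterable of (institution_id, image_id) tuples (hypothetical format).
    held_out: set of institution ids reserved entirely for the test set.
    """
    train = [r for r in records if r[0] not in held_out]
    external_test = [r for r in records if r[0] in held_out]
    return train, external_test

records = [("center_A", 1), ("center_A", 2), ("center_B", 3), ("center_C", 4)]
train, test = split_by_institution(records, held_out={"center_C"})
# No institution appears in both sets, so test performance reflects
# generalization to unseen devices and imaging conditions.
```

A random per-image split would instead leak device- and center-specific appearance into the test set, inflating the reported accuracy.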

In addition, most current studies are retrospective, and researchers tend to select clear, high-quality endoscopic images after excluding low-quality images affected by interfering factors (such as bleeding, mucus secretion, and food residue), resulting in selection bias. This bias often causes retrospective trials to outperform actual performance in clinical applications[56]. In the future, large numbers of prospective studies should be carried out to continuously refine AI systems and increase their accuracy, sensitivity, and specificity in clinical trials, laying a solid foundation for the real-time clinical application of AI.

Lack of endoscopic video-assisted diagnoses

Currently, most AI systems are based on the processing of static images rather than the modeling of dynamic videos. Static images are mostly taken after the mucosal surface has been well prepared and the lesion location determined; they therefore lack the interference present in dynamic videos, such as poor mucosal preparation and endoscope movement, so relevant information is missing[57]. This leaves a large gap between AI training sets and the actual endoscopic working environment, which limits the clinical applicability of AI to some extent. Training on video data sets can better address these problems. Moreover, endoscopic video analysis can be used for secondary review after real-time endoscopy to quickly identify and screen esophageal diseases and reduce missed diagnoses; this type of analysis has considerable development potential for DL-assisted endoscopy in the future.
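One practical way a video-based system differs from single-image classification is that per-frame predictions can be aggregated over time: a moving average over frame scores, for example, suppresses one-frame flickers that would otherwise trigger spurious detections. A minimal sketch of this idea (the scores, window, and threshold are illustrative assumptions, not taken from any cited system):

```python
from collections import deque

def smooth_scores(frame_scores, window=5, threshold=0.5):
    """Flag a lesion only when the moving average of per-frame
    lesion probabilities exceeds the threshold."""
    buf = deque(maxlen=window)  # sliding window of recent frame scores
    flags = []
    for score in frame_scores:
        buf.append(score)
        flags.append(sum(buf) / len(buf) > threshold)
    return flags

# An isolated single-frame spike (0.9) is suppressed, while a sustained
# run of high scores is flagged.
scores = [0.1, 0.1, 0.9, 0.1, 0.1, 0.8, 0.9, 0.8, 0.9, 0.8]
print(smooth_scores(scores))
```

The same trade-off discussed above applies here: a longer window lowers the false positive rate but delays the first alert on a true lesion.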

Prospects for development

CAD systems based on DL technology have attracted increasing attention, reflecting the good prospects of applying DL to real-time endoscopy. CAD can indicate the lesion site during real-time endoscopic examination, provide an accurate classification, and serve as a second observer to assist in disease diagnosis. In low-resource or densely populated areas, CAD can support population-based endoscopic screening and help avoid missed diagnoses or misdiagnoses caused by endoscopists' lack of experience or professional knowledge or by fatigue from heavy workloads. CAD can be used to train inexperienced new endoscopists, provide them with professional knowledge, and improve their professional skills. CAD can also be deployed online to provide more expert endoscopic diagnoses in areas lacking experienced endoscopists, making it easier for patients to be seen at local hospitals.


Most current research remains focused on early system development and feasibility studies, and subsequent product development has not followed. DL-based CAD systems are still at the experimental research stage. Therefore, in the future, large numbers of high-quality prospective studies should be carried out in combination with more powerful, efficient, and stable algorithms and frameworks. With the establishment of standardized, large-sample data centers, CAD systems will be able to provide endoscopists with more accurate diagnosis and treatment options, teaching assistance, auxiliary assessment, and telemedicine for early EC. An increasing number of patients and physicians will benefit from this progress.


Manuscript source: Invited manuscript

Specialty type: Gastroenterology and hepatology

Country/Territory of origin: China

Peer-review report’s scientific quality classification

Grade A (Excellent): A

Grade B (Very good): 0

Grade C (Good): C

Grade D (Fair): 0

Grade E (Poor): 0

P-Reviewer: Shafqat S, Viswanath YK S-Editor: Liu M L-Editor: Wang TQ P-Editor: Liu M

1. Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2018;68:394-424.
2. Ferlay J, Colombet M, Soerjomataram I, Mathers C, Parkin DM, Piñeros M, Znaor A, Bray F. Estimating the global cancer incidence and mortality in 2018: GLOBOCAN sources and methods. Int J Cancer. 2019;144:1941-1953.
3. Thrift AP. The epidemic of oesophageal carcinoma: Where are we now? Cancer Epidemiol. 2016;41:88-95.
4. Bibault JE, Giraud P, Burgun A. Big Data and machine learning in radiation oncology: State of the art and future prospects. Cancer Lett. 2016;382:110-117.
5. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, Thrun S. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542:115-118.
6. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, Venugopalan S, Widner K, Madams T, Cuadros J, Kim R, Raman R, Nelson PC, Mega JL, Webster DR. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA. 2016;316:2402-2410.
7. Le Berre C, Sandborn WJ, Aridhi S, Devignes MD, Fournier L, Smaïl-Tabbone M, Danese S, Peyrin-Biroulet L. Application of Artificial Intelligence to Gastroenterology and Hepatology. Gastroenterology. 2020;158:76-94.e2.
8. Shin HC, Roth HR, Gao M, Lu L, Xu Z, Nogues I, Yao J, Mollura D, Summers RM. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Trans Med Imaging. 2016;35:1285-1298.
9. Yamashita R, Nishio M, Do RKG, Togashi K. Convolutional neural networks: an overview and application in radiology. Insights Imaging. 2018;9:611-629.
10. Mori Y, Kudo SE, Mohmed HEN, Misawa M, Ogata N, Itoh H, Oda M, Mori K. Artificial intelligence and upper gastrointestinal endoscopy: Current status and future perspective. Dig Endosc. 2019;31:378-388.
11. Soh YSA, Lee YY, Gotoda T, Sharma P, Ho KY; Asian Barrett's Consortium. Challenges to diagnostic standardization of Barrett's esophagus in Asia. Dig Endosc. 2019;31:609-618.
12. Fitzgerald RC, di Pietro M, Ragunath K, Ang Y, Kang JY, Watson P, Trudgill N, Patel P, Kaye PV, Sanders S, O'Donovan M, Bird-Lieberman E, Bhandari P, Jankowski JA, Attwood S, Parsons SL, Loft D, Lagergren J, Moayyedi P, Lyratzopoulos G, de Caestecker J; British Society of Gastroenterology. British Society of Gastroenterology guidelines on the diagnosis and management of Barrett's oesophagus. Gut. 2014;63:7-42.
13. Shaheen NJ, Falk GW, Iyer PG, Gerson LB; American College of Gastroenterology. ACG Clinical Guideline: Diagnosis and Management of Barrett's Esophagus. Am J Gastroenterol. 2016;111:30-50; quiz 51.
14. Wani S, Gaddam S. Editorial: Best Practices in Surveillance of Barrett's Esophagus. Am J Gastroenterol. 2017;112:1056-1060.
15. Tavakkoli A, Appelman HD, Beer DG, Madiyal C, Khodadost M, Nofz K, Metko V, Elta G, Wang T, Rubenstein JH. Use of Appropriate Surveillance for Patients With Nondysplastic Barrett's Esophagus. Clin Gastroenterol Hepatol. 2018;16:862-869.e3.
16. Sharma P, Savides TJ, Canto MI, Corley DA, Falk GW, Goldblum JR, Wang KK, Wallace MB, Wolfsen HC; ASGE Technology and Standards of Practice Committee. The American Society for Gastrointestinal Endoscopy PIVI (Preservation and Incorporation of Valuable Endoscopic Innovations) on imaging in Barrett's Esophagus. Gastrointest Endosc. 2012;76:252-254.
17. Schölvinck DW, van der Meulen K, Bergman JJGHM, Weusten BLAM. Detection of lesions in dysplastic Barrett's esophagus by community and expert endoscopists. Endoscopy. 2017;49:113-120.
18. Hvid-Jensen F, Pedersen L, Drewes AM, Sørensen HT, Funch-Jensen P. Incidence of adenocarcinoma among patients with Barrett's esophagus. N Engl J Med. 2011;365:1375-1383.
19. Sikkema M, de Jonge PJ, Steyerberg EW, Kuipers EJ. Risk of esophageal adenocarcinoma and mortality in patients with Barrett's esophagus: a systematic review and meta-analysis. Clin Gastroenterol Hepatol. 2010;8:235-244.
20. Fleischmann C, Messmann H. Endoscopic treatment of early esophageal squamous neoplasia. Minerva Chir. 2018;73:378-384.
21. Sami SS, Iyer PG. Recent Advances in Screening for Barrett's Esophagus. Curr Treat Options Gastroenterol. 2018;16:1-14.
22. Liu J, Li M, Li Z, Zuo XL, Li CQ, Dong YY, Zhou CJ, Li YQ. Learning curve and interobserver agreement of confocal laser endomicroscopy for detecting precancerous or early-stage esophageal squamous cancer. PLoS One. 2014;9:e99089.
23. van der Sommen F, Zinger S, Curvers WL, Bisschops R, Pech O, Weusten BL, Bergman JJ, de With PH, Schoon EJ. Computer-aided detection of early neoplastic lesions in Barrett's esophagus. Endoscopy. 2016;48:617-624.
24. Struyvenberg MR, de Groof AJ, van der Putten J, van der Sommen F, Baldaque-Silva F, Omae M, Pouw R, Bisschops R, Vieth M, Schoon EJ, Curvers WL, de With PH, Bergman JJ. A computer-assisted algorithm for narrow-band imaging-based tissue characterization in Barrett's esophagus. Gastrointest Endosc. 2021;93:89-98.
25. de Groof AJ, Struyvenberg MR, van der Putten J, van der Sommen F, Fockens KN, Curvers WL, Zinger S, Pouw RE, Coron E, Baldaque-Silva F, Pech O, Weusten B, Meining A, Neuhaus H, Bisschops R, Dent J, Schoon EJ, de With PH, Bergman JJ. Deep-Learning System Detects Neoplasia in Patients With Barrett's Esophagus With Higher Accuracy Than Endoscopists in a Multistep Training and Validation Study With Benchmarking. Gastroenterology. 2020;158:915-929.e4.
26. de Groof AJ, Struyvenberg MR, Fockens KN, van der Putten J, van der Sommen F, Boers TG, Zinger S, Bisschops R, de With PH, Pouw RE, Curvers WL, Schoon EJ, Bergman JJGHM. Deep learning algorithm detection of Barrett's neoplasia with high accuracy during live endoscopic procedures: a pilot study (with video). Gastrointest Endosc. 2020;91:1242-1250.
27. Hong J, Park BY, Park H. Convolutional neural network classifier for distinguishing Barrett's esophagus and neoplasia endomicroscopy images. Annu Int Conf IEEE Eng Med Biol Soc. 2017;2017:2892-2895.
28. Hashimoto R, Requa J, Dao T, Ninh A, Tran E, Mai D, Lugo M, El-Hage Chehade N, Chang KJ, Karnes WE, Samarasena JB. Artificial intelligence using convolutional neural networks for real-time detection of early esophageal neoplasia in Barrett's esophagus (with video). Gastrointest Endosc. 2020;91:1264-1271.e1.
29. de Groof J, van der Sommen F, van der Putten J, Struyvenberg MR, Zinger S, Curvers WL, Pech O, Meining A, Neuhaus H, Bisschops R, Schoon EJ, de With PH, Bergman JJ. The Argos project: The development of a computer-aided detection system to improve detection of Barrett's neoplasia on white light endoscopy. United European Gastroenterol J. 2019;7:538-547.
30. Naveed M, Kubiliun N. Endoscopic Treatment of Early-Stage Esophageal Cancer. Curr Oncol Rep. 2018;20:71.
31. Yang H, Hu B. Recent advances in early esophageal cancer: diagnosis and treatment based on endoscopy. Postgrad Med. 2021;133:665-673.
32. Shimizu Y, Omori T, Yokoyama A, Yoshida T, Hirota J, Ono Y, Yamamoto J, Kato M, Asaka M. Endoscopic diagnosis of early squamous neoplasia of the esophagus with iodine staining: high-grade intra-epithelial neoplasia turns pink within a few minutes. J Gastroenterol Hepatol. 2008;23:546-550.
33. Kolb JM, Wani S. Barrett's esophagus: current standards in advanced imaging. Transl Gastroenterol Hepatol. 2021;6:14.
34. Minami H, Isomoto H, Inoue H, Akazawa Y, Yamaguchi N, Ohnita K, Takeshima F, Hayashi T, Nakayama T, Nakao K. Significance of background coloration in endoscopic detection of early esophageal squamous cell carcinoma. Digestion. 2014;89:6-11.
35. Nagami Y, Tominaga K, Machida H, Nakatani M, Kameda N, Sugimori S, Okazaki H, Tanigawa T, Yamagami H, Kubo N, Shiba M, Watanabe K, Watanabe T, Iguchi H, Fujiwara Y, Ohira M, Hirakawa K, Arakawa T. Usefulness of non-magnifying narrow-band imaging in screening of early esophageal squamous cell carcinoma: a prospective comparative study using propensity score matching. Am J Gastroenterol. 2014;109:845-854.
36. Ishihara R, Takeuchi Y, Chatani R, Kidu T, Inoue T, Hanaoka N, Yamamoto S, Higashino K, Uedo N, Iishi H, Tatsuta M, Tomita Y, Ishiguro S. Prospective evaluation of narrow-band imaging endoscopy for screening of esophageal squamous mucosal high-grade neoplasia in experienced and less experienced endoscopists. Dis Esophagus. 2010;23:480-486.
37. Ebigbo A, Mendel R, Probst A, Manzeneder J, Souza LA Jr, Papa JP, Palm C, Messmann H. Computer-aided diagnosis using deep learning in the evaluation of early oesophageal adenocarcinoma. Gut. 2019;68:1143-1145.
38. Ebigbo A, Mendel R, Probst A, Manzeneder J, Prinz F, de Souza LA Jr, Papa J, Palm C, Messmann H. Real-time use of artificial intelligence in the evaluation of cancer in Barrett's oesophagus. Gut. 2020;69:615-616.
39. Horie Y, Yoshio T, Aoyama K, Yoshimizu S, Horiuchi Y, Ishiyama A, Hirasawa T, Tsuchida T, Ozawa T, Ishihara S, Kumagai Y, Fujishiro M, Maetani I, Fujisaki J, Tada T. Diagnostic outcomes of esophageal cancer by artificial intelligence using convolutional neural networks. Gastrointest Endosc. 2019;89:25-32.
40. Cai SL, Li B, Tan WM, Niu XJ, Yu HH, Yao LQ, Zhou PH, Yan B, Zhong YS. Using a deep learning system in endoscopy for screening of early esophageal squamous cell carcinoma (with video). Gastrointest Endosc. 2019;90:745-753.e2.
41. Ohmori M, Ishihara R, Aoyama K, Nakagawa K, Iwagami H, Matsuura N, Shichijo S, Yamamoto K, Nagaike K, Nakahara M, Inoue T, Aoi K, Okada H, Tada T. Endoscopic detection and differentiation of esophageal lesions using a deep neural network. Gastrointest Endosc. 2020;91:301-309.e1.
42. Liu G, Hua J, Wu Z, Meng T, Sun M, Huang P, He X, Sun W, Li X, Chen Y. Automatic classification of esophageal lesions in endoscopic images using a convolutional neural network. Ann Transl Med. 2020;8:486.
43. Kumagai Y, Takubo K, Kawada K, Aoyama K, Endo Y, Ozawa T, Hirasawa T, Yoshio T, Ishihara S, Fujishiro M, Tamaru JI, Mochiki E, Ishida H, Tada T. Diagnosis using deep-learning artificial intelligence based on the endocytoscopic observation of the esophagus. Esophagus. 2019;16:180-187.
44. Guo L, Xiao X, Wu C, Zeng X, Zhang Y, Du J, Bai S, Xie J, Zhang Z, Li Y, Wang X, Cheung O, Sharma M, Liu J, Hu B. Real-time automated diagnosis of precancerous lesions and early esophageal squamous cell carcinoma using a deep learning model (with videos). Gastrointest Endosc. 2020;91:41-51.
45. Kuwano H, Nishimura Y, Oyama T, Kato H, Kitagawa Y, Kusano M, Shimada H, Takiuchi H, Toh Y, Doki Y, Naomoto Y, Matsubara H, Miyazaki T, Muto M, Yanagisawa A. Guidelines for Diagnosis and Treatment of Carcinoma of the Esophagus April 2012 edited by the Japan Esophageal Society. Esophagus. 2015;12:1-30.
46. Tokai Y, Yoshio T, Aoyama K, Horie Y, Yoshimizu S, Horiuchi Y, Ishiyama A, Tsuchida T, Hirasawa T, Sakakibara Y, Yamada T, Yamaguchi S, Fujisaki J, Tada T. Application of artificial intelligence using convolutional neural networks in determining the invasion depth of esophageal squamous cell carcinoma. Esophagus. 2020;17:250-256.
47. Nakagawa K, Ishihara R, Aoyama K, Ohmori M, Nakahira H, Matsuura N, Shichijo S, Nishida T, Yamada T, Yamaguchi S, Ogiyama H, Egawa S, Kishida O, Tada T. Classification for invasion depth of esophageal squamous cell carcinoma using a deep neural network compared with experienced endoscopists. Gastrointest Endosc. 2019;90:407-414.
48. Zhao YY, Xue DX, Wang YL, Zhang R, Sun B, Cai YP, Feng H, Cai Y, Xu JM. Computer-assisted diagnosis of early esophageal squamous cell carcinoma using narrow-band imaging magnifying endoscopy. Endoscopy. 2019;51:333-341.
49. Everson M, Herrera L, Li W, Luengo IM, Ahmad O, Banks M, Magee C, Alzoubaidi D, Hsu HM, Graham D, Vercauteren T, Lovat L, Ourselin S, Kashin S, Wang HP, Wang WL, Haidry RJ. Artificial intelligence for the real-time classification of intrapapillary capillary loop patterns in the endoscopic diagnosis of early oesophageal squamous cell carcinoma: A proof-of-concept study. United European Gastroenterol J. 2019;7:297-306.
50. Uema R, Hayashi Y, Tashiro T, Saiki H, Kato M, Amano T, Tani M, Yoshihara T, Inoue T, Kimura K, Iwatani S, Sakatani A, Yoshii S, Tsujii Y, Shinzaki S, Iijima H, Takehara T. Use of a convolutional neural network for classifying microvessels of superficial esophageal squamous cell carcinomas. J Gastroenterol Hepatol. 2021;36:2239-2246.
51. Fukuda H, Ishihara R, Kato Y, Matsunaga T, Nishida T, Yamada T, Ogiyama H, Horie M, Kinoshita K, Tada T. Comparison of performances of artificial intelligence vs expert endoscopists for real-time assisted diagnosis of esophageal squamous cell carcinoma (with video). Gastrointest Endosc. 2020;92:848-855.
52. Shimamoto Y, Ishihara R, Kato Y, Shoji A, Inoue T, Matsueda K, Miyake M, Waki K, Kono M, Fukuda H, Matsuura N, Nagaike K, Aoi K, Yamamoto K, Nakahara M, Nishihara A, Tada T. Real-time assessment of video images for esophageal squamous cell carcinoma invasion depth using artificial intelligence. J Gastroenterol. 2020;55:1037-1045.
53. Waki K, Ishihara R, Kato Y, Shoji A, Inoue T, Matsueda K, Miyake M, Shimamoto Y, Fukuda H, Matsuura N, Ono Y, Yao K, Hashimoto S, Terai S, Ohmori M, Tanaka K, Kato M, Shono T, Miyamoto H, Tanaka Y, Tada T. Usefulness of an artificial intelligence system for the detection of esophageal squamous cell carcinoma evaluated with videos simulating overlooking situation. Dig Endosc. 2021; epub ahead of print.
54. Struyvenberg MR, de Groof AJ, Bergman JJ, van der Sommen F, de With PHN, Konda VJA, Curvers WL. Advanced Imaging and Sampling in Barrett's Esophagus: Artificial Intelligence to the Rescue? Gastrointest Endosc Clin N Am. 2021;31:91-103.
55. Mutasa S, Sun S, Ha R. Understanding artificial intelligence based radiology studies: What is overfitting? Clin Imaging. 2020;65:96-99.
56. Lazăr DC, Avram MF, Faur AC, Goldiş A, Romoşan I, Tăban S, Cornianu M. The Impact of Artificial Intelligence in the Endoscopic Assessment of Premalignant and Malignant Esophageal Lesions: Present and Future. Medicina (Kaunas). 2020;56:364.
57. Zhang YH, Guo LJ, Yuan XL, Hu B. Artificial intelligence-assisted esophageal cancer management: Now and future. World J Gastroenterol. 2020;26:5256-5271.