Opinion Review Open Access
Copyright ©The Author(s) 2021. Published by Baishideng Publishing Group Inc. All rights reserved.
Artif Intell Gastrointest Endosc. Jun 28, 2021; 2(3): 50-62
Published online Jun 28, 2021. doi: 10.37126/aige.v2.i3.50
Current situation and prospect of artificial intelligence application in endoscopic diagnosis of Helicobacter pylori infection
Yi-Fan Lu, Bin Lyu, Department of Gastroenterology, The First Affiliated Hospital of Zhejiang Chinese Medical University, Hangzhou 310006, Zhejiang Province, China
ORCID number: Yi-Fan Lu (0000-0002-3623-1332); Bin Lyu (0000-0002-6247-571X).
Author contributions: Lu YF contributed to bibliographic retrieval, data compilation, methodology, software, and manuscript drafting; Lyu B reviewed and proofread the manuscript; all authors contributed to manuscript editing; all authors have read and approved the final manuscript.
Supported by National Natural Science Foundation of China, No. 81770535 and No. 81970470.
Conflict-of-interest statement: The authors declare no conflict of interest for this article.
Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Bin Lyu, MD, Chief Doctor, Professor, Department of Gastroenterology, First Affiliated Hospital of Zhejiang Chinese Medical University, No. 54 Youdian Road, Shangcheng District, Hangzhou 310006, Zhejiang Province, China. lvbin@medmail.com.cn
Received: May 2, 2021
Peer-review started: May 2, 2021
First decision: May 19, 2021
Revised: June 1, 2021
Accepted: June 18, 2021
Article in press: June 18, 2021
Published online: June 28, 2021


With the appearance and prevalence of deep learning, artificial intelligence (AI) has been broadly studied and has made great progress in various fields of medicine, including gastroenterology. Helicobacter pylori (H. pylori), which is closely associated with various digestive and extradigestive diseases, has a high infection rate worldwide. Endoscopic surveillance can evaluate H. pylori infection status and predict the risk of gastric cancer, but there are no objective diagnostic criteria to eliminate differences between operators. Computer-aided diagnosis systems based on AI technology have demonstrated excellent performance in the diagnosis of H. pylori infection, superior to that of novice endoscopists and similar to that of skilled ones. Compared with visual diagnosis of H. pylori infection by endoscopists, AI possesses numerous advantages: High accuracy, high efficiency, high quality control, high objectivity, and high-effect teaching. This review summarizes previous and recent studies on AI-assisted diagnosis of H. pylori infection, points out their limitations, and puts forward prospects for future research.

Key Words: Artificial intelligence, Helicobacter pylori, Endoscopy, Diagnosis, Deep learning, Machine learning

Core Tip: In recent years, artificial intelligence (AI) has been rapidly developed and applied in various fields of medicine, including gastroenterology. We witnessed the promising application of AI in endoscopic diagnosis of Helicobacter pylori infection. In this review, we summarize the advantages of AI, point out the limitations of current studies, and put forward the direction of future research.


Helicobacter pylori (H. pylori) is a Gram-negative bacterium that infects the human stomach and is closely associated with a variety of diseases, including chronic gastritis, peptic ulcer, gastric adenocarcinoma, mucosa-associated lymphoid tissue lymphoma, and other digestive diseases, as well as extradigestive diseases involving the blood, nervous, and cardiovascular systems, the skin, and the eyes[1,2]. The International Agency for Research on Cancer has categorized H. pylori as a group 1 carcinogen. A recent systematic review and meta-analysis pooling 410879 participants showed that the overall prevalence of H. pylori infection worldwide was 44.3% [95% confidence interval (CI): 40.9-47.7][3]. Therefore, accurate diagnosis of H. pylori infection is extremely important for the prevention and treatment of related diseases. Various diagnostic methods, both non-invasive and invasive, are currently available for detecting H. pylori infection[4], but endoscopic evaluation of H. pylori infection status remains irreplaceable, as it can also assist in the screening of early gastric cancer.

Artificial intelligence (AI) is a technology science that studies and develops the theory, method, technology, and application system that is used to simulate, extend, and expand human intelligence. With the emergence and development of deep learning (DL), the application of AI in medicine has also been enthusiastically explored and extensively studied[5-8]. Numerous research studies, using AI technology to identify or distinguish images in different medical fields including gastroenterology, radiology, neurology, orthopedics, pathology, and ophthalmology, have been published[9].

In this review, we focus on the application of AI in the field of endoscopic diagnosis of H. pylori infection and discuss future prospects.


Most patients with gastric cancer have or have had H. pylori infection[10,11]. A large number of studies have indicated that eradication of H. pylori can effectively reduce the risk of gastric cancer[12-14]. However, a study conducted by Mabe et al[15] showed that people who have undergone H. pylori eradication still have a higher risk of developing gastric cancer than people who have never been infected with H. pylori. Therefore, even after H. pylori eradication, regular endoscopic and histological surveillance is strongly recommended[16,17]. Consequently, endoscopic assessment of H. pylori infection status (non-infection, past infection, and current infection) has become increasingly important.

The Kyoto classification of gastritis was proposed to assess the status of H. pylori infection and more accurately evaluate the risk of gastric cancer[18]. According to the endoscopic characteristics of the gastric mucosa, three situations can be distinguished: H. pylori-uninfected, H. pylori-infected, and H. pylori-past-infected gastric mucosa[18,19]. It should be noted that the Kyoto classification score is the sum of scores for five endoscopic features (atrophy, intestinal metaplasia, enlarged folds, nodularity, and diffuse redness with or without a regular arrangement of collecting venules) and ranges from 0 to 8. The scoring system has demonstrated excellent ability to evaluate H. pylori infection and predict the risk of gastric cancer[20]. However, the above endoscopic features lack objective indicators, and there is potential for interobserver and intraobserver variability in the optical diagnosis of H. pylori-infected mucosa[21]. In other words, for endoscopic diagnosis of H. pylori infection, diagnostic consistency among endoscopists is not ideal. Moreover, although professional endoscopists can determine H. pylori infection through meticulous visual inspection of the mucosa during endoscopic examination, novices need a large amount of time to perform this task effectively.
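The arithmetic of the Kyoto classification score can be sketched as follows. This is an illustrative sketch only, not clinical software; the per-feature maxima used here (atrophy 0-2, intestinal metaplasia 0-2, enlarged folds 0-1, nodularity 0-1, diffuse redness 0-2, summing to the 0-8 range described above) are an assumption stated for the example, and all field names are our own.

```python
# Illustrative calculator for the Kyoto classification score described above.
# Assumed per-feature score ranges (not defined in this review itself):
# atrophy 0-2, intestinal metaplasia 0-2, enlarged folds 0-1,
# nodularity 0-1, diffuse redness 0-2; the total therefore spans 0-8.

KYOTO_RANGES = {
    "atrophy": 2,
    "intestinal_metaplasia": 2,
    "enlarged_folds": 1,
    "nodularity": 1,
    "diffuse_redness": 2,
}

def kyoto_score(findings: dict) -> int:
    """Sum the five endoscopic feature scores; unlisted features count as 0."""
    total = 0
    for feature, max_score in KYOTO_RANGES.items():
        value = findings.get(feature, 0)
        if not 0 <= value <= max_score:
            raise ValueError(f"{feature} score must be between 0 and {max_score}")
        total += value
    return total

# Example: marked atrophy with mild diffuse redness, no other findings
print(kyoto_score({"atrophy": 2, "diffuse_redness": 1}))  # 3
```

Because the score is a plain sum, a higher total simply reflects more (and more severe) mucosal findings, which is why the system tracks both infection status and gastric cancer risk.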

The significance of endoscopic surveillance is not limited to determining whether H. pylori infection is current, absent, or past; it also allows an overall evaluation of the stomach. First, the classical Kimura-Takemoto classification is still widely used today to help endoscopists classify the atrophic pattern of the stomach by observing the endoscopic atrophic border[22]. Second, most gastric cancers develop from H. pylori-associated gastritis via a multistep pathway of precancerous lesions, in particular atrophic gastritis, intestinal metaplasia, and dysplasia/intraepithelial neoplasia[16]. Histological staging systems such as OLGA and OLGIM can be used to assess gastric cancer risk based on the severity and extent of atrophy and intestinal metaplasia[23-25]. Finally, when one detection method shows H. pylori negativity but there are typical signs of H. pylori infection under endoscopy, a different method should be selected for confirmation to avoid missed diagnosis.


Physicians and endoscopists may be confused about the precise concepts of AI, machine learning (ML), and DL. AI is a macro concept with many branches (e.g., planning and scheduling, expert systems, multi-agent systems, and evolutionary computation). In general, there are three approaches to AI: Symbolism (rule-based, such as IBM Watson), connectionism (network- and connection-based, such as DL), and Bayesianism (based on Bayes' theorem)[26]. With AI, computers can imitate humans and display intelligence similar to that of humans.

ML is a subset of AI and a method of realizing it. ML is defined as a set of methods that automatically detect patterns in data and then use the uncovered patterns to predict future data or enable decision making under uncertainty[27]. ML is broadly divided into supervised and unsupervised methods. Unsupervised learning is used when the purpose is to identify groups within data according to commonalities, with no a priori knowledge of the number of groups or their significance. Supervised learning is used when the training data consist of individuals represented as input-output pairs: The input comprises individual descriptors, while the output comprises the outcome of interest to be predicted, either a class for classification tasks or a numerical value for regression tasks. The supervised ML algorithm then learns predictive models that can map new inputs to outputs[28]. The most basic practice of ML [e.g., support vector machine (SVM), random forest, and Gaussian mixture models] is to use algorithms to parse data, learn from them, and then make decisions and predictions about events in the real world. Today's ML has achieved great success in computer vision and other fields; however, it has limitations, requiring a certain amount of manual feature engineering in the process. Although the image recognition rate of ML is sufficient for commercialization, it remains very low in certain fields, which is why its image recognition skills are still not as good as human capabilities[29].
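The supervised paradigm described above (training on input-output pairs, then mapping new inputs to outputs) can be illustrated with a deliberately tiny, library-free sketch. A nearest-centroid rule stands in here for heavier classifiers such as SVM or random forest, and the two-dimensional "feature vectors" are purely synthetic, not derived from any endoscopic data.

```python
# Toy illustration of supervised learning: training data are input-output
# pairs; the fitted model then maps unseen inputs to predicted outputs.
# A nearest-centroid classifier stands in for SVM / random forest here.
from statistics import mean

def fit(inputs, labels):
    """Learn one centroid (feature-wise mean) per class label."""
    centroids = {}
    for label in set(labels):
        rows = [x for x, y in zip(inputs, labels) if y == label]
        centroids[label] = [mean(col) for col in zip(*rows)]
    return centroids

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest (squared distance)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Synthetic 2-feature examples, two per class
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = ["negative", "negative", "positive", "positive"]
model = fit(X, y)
print(predict(model, [0.85, 0.75]))  # positive
```

The point of the sketch is the workflow, not the classifier: Real systems differ in how the decision boundary is learned, but all supervised methods share this fit-then-predict structure.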

DL [e.g., artificial neural network, deep neural network (DNN), convolutional neural network (CNN), and recurrent neural network] is a technique for achieving ML in which the computer rapidly collects, analyzes, and processes the required data while performing certain tasks, without relying on manually formatted data. DL is characterized by autonomous learning: Once a training data set is provided, the program can extract the key features and quantities by using the back-propagation algorithm to change the internal parameters of each neural network layer, without human instruction[30]. Compared with conventional hand-crafted algorithms, recently developed DL algorithms can automatically extract and learn the discriminative features of images and then classify them[31]. DL has the potential to automatically detect lesions, classify lesions, suggest differential diagnoses, and draft preliminary medical reports, goals that may be realized in the near future.

CNN is a DNN modeled on the way the visual cortex of the human brain processes and recognizes images, and it is now the most popular network architecture in DL for images[29]. A CNN uses multiple network layers (consecutive convolutional layers followed by pooling layers) to extract the key features from an image and provides a final classification through fully connected layers as the output[30]. Compared with other DL structures, CNN is the prevalent method for image recognition because of its excellent performance in image and video applications. For example, CNNs perform best in image classification on large image repositories such as ImageNet[32]. Additionally, CNNs are easier to train than other DL techniques and have the advantage of using fewer parameters.
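The two core operations named above, convolution and pooling, can be sketched in a few lines. This is a drastically simplified, pure-Python illustration of one convolutional layer followed by one 2x2 max-pooling layer on a synthetic 5x5 "image"; real CNNs stack many such layers, learn their kernel weights by back-propagation, and end in fully connected layers.

```python
# Minimal sketch of a CNN's building blocks: a convolutional layer slides a
# small kernel over the image to produce a feature map, and max pooling then
# downsamples that map, keeping the strongest response in each 2x2 block.

def conv2d(image, kernel):
    """'Valid' 2D convolution of a grayscale image with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def max_pool2x2(fmap):
    """Downsample a feature map by taking the max of each 2x2 block."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

image = [[1, 0, 0, 2, 1],
         [0, 1, 2, 0, 0],
         [1, 2, 1, 0, 1],
         [0, 0, 1, 1, 0],
         [2, 1, 0, 0, 2]]
edge_kernel = [[1, -1], [1, -1]]        # a crude hand-set vertical-edge kernel
features = conv2d(image, edge_kernel)   # 4x4 feature map
pooled = max_pool2x2(features)          # 2x2 map after pooling
print(len(pooled), len(pooled[0]))      # 2 2
```

Note how each stage shrinks the representation (5x5 image to 4x4 feature map to 2x2 pooled map) while concentrating the discriminative information; this weight sharing is also why CNNs need comparatively few parameters.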

In recent years, AI has flourished in the field of gastroenterology, with applications throughout the digestive tract, especially in image recognition and classification. van der Sommen et al[33] reported an automated computer algorithm for the detection of early neoplasia in Barrett's esophagus based on 100 images from 44 patients with Barrett's esophagus. At the per-image level, the sensitivity and specificity of the algorithm were both 0.83; at the patient level, they were 0.86 and 0.87, respectively. Everson et al[34] trained a CNN to classify intrapapillary capillary loops for real-time prediction of early squamous cell cancer of the esophagus, demonstrating strong diagnostic performance with a sensitivity of 93.7% and accuracy of 91.7%, comparable to an expert panel of endoscopists. Xu et al[35] established a deep CNN system to detect gastric precancerous conditions (including gastric atrophy and intestinal metaplasia) by image-enhanced endoscopy (IEE). In the internal test set, the multicenter external test set, and the prospective video test set, the diagnostic accuracy for gastric atrophy was 0.901, 0.864, and 0.878, and that for intestinal metaplasia was 0.908, 0.859, and 0.898, respectively. To assist endoscopists in distinguishing early gastric cancer, Kanesaka et al[36] studied a computer-aided diagnosis (CAD) system utilizing SVM technology to facilitate the use of magnifying narrow band imaging (NBI), which achieved an accuracy of 96.3%, sensitivity of 96.7%, and specificity of 95%. Since viewing and diagnosing capsule endoscopy images is an extremely time-consuming process, Park et al[37] developed an AI-assisted reading model based on the Inception-ResNet-V2 model to identify different types of lesions and evaluated the clinical significance of this model. The results showed that the model not only helped operators improve lesion detection rates but also reduced reading time.
Urban et al[38] constructed a deep CNN model, trained on 8641 images from 2000 patients, to locate and identify colorectal polyps, which achieved an area under the receiver operating characteristic curve of 0.991 and accuracy of 96.4%. In addition, several studies have demonstrated the feasibility and promise of AI-assisted endoscopy in the diagnosis of H. pylori infection.


As early as 2004, Huang et al[39] developed a CAD model based on a refined feature selection with neural network (RFSNN) technique designed to predict H. pylori-related gastric histological features. A total of 104 dyspeptic patients were enrolled in the study, and all subjects were prospectively evaluated by endoscopy and gastric biopsy. The authors used endoscopic images and histological features of 30 patients (15 with and 15 without H. pylori infection) to train the RFSNN model, and then used the image parameters of the remaining 74 patients to validate the predictive model of H. pylori infection. At the same time, six endoscopists (three novices and three skilled seniors) were invited to predict the histological features of the gastric antrum from endoscopic images. The results showed that the sensitivity and specificity for detecting H. pylori infection were 85.4% and 90.9%, respectively, when the RFSNN model included images of the same patient's antrum, body, and cardia for analysis. The accuracies of the six endoscopists in predicting H. pylori infection were 67.5%, 64.8%, 72.9%, 74.3%, 79.7%, and 81.1%, respectively (the first three were novices and the latter three were skilled seniors). The accuracy of the RFSNN model in predicting H. pylori infection from antrum images was 85.1%, higher than that of the endoscopists. Notably, the prediction system also had high sensitivity and specificity in the diagnosis of atrophy and intestinal metaplasia, again superior to the endoscopists. This RFSNN system provides real-time and comprehensive information about the stomach during endoscopy and has the potential to overcome the shortcomings of localized biopsy. For various reasons, white-light endoscopy was used throughout the study instead of IEE, which is more conducive to the diagnosis of H. pylori infection. As an early study of AI in diagnosing H. pylori infection, this paper provides reference data and innovative ideas for subsequent studies.

In 2008, Huang et al[40] conducted a further study of AI-assisted endoscopic diagnosis of H. pylori infection. They designed a CAD system combining SVM and sequential forward floating selection (SFFS) to diagnose the gastric histology of H. pylori using features of white-light endoscopic images. The study aimed to use SFFS to select, from a large number of candidate image features, those most suitable for describing the relationship with histology, and then to use SVM for classification. A total of 236 dyspeptic patients were enrolled, 130 of whom were defined as H. pylori-infected using histological examination as the gold standard. The results showed that the accuracy of diagnosing H. pylori infection was 87.8%, 87.6%, and 86.7% when the SVM-with-SFFS system was used to analyze images of the antrum, body, and cardia, respectively. Compared with SVM without SFFS, the SVM-with-SFFS system had higher diagnostic accuracy in most cases. This indicates that screening image features with SFFS before classification is of great significance: It not only improves diagnostic accuracy by excluding features with low correlation, but also reduces the time needed to train and test the system. Furthermore, the classification results were subjected to 1000 repeated tests, confirming the reliability of the experiment. In addition, the authors compared the new diagnostic system with their previous system[39], which used a neural network with feature selection to detect H. pylori infection, and showed that the new system had a higher classification rate. Unfortunately, both studies classified H. pylori infection status only as infected or uninfected, without considering cases in which the infection had resolved spontaneously or been eradicated with drugs.

In 2017, Shichijo et al[41] developed two deep CNN systems: One based on 32208 images, unclassified as to anatomical location, that were either positive or negative for H. pylori (the development data set), and the other based on the same images classified according to eight anatomical locations (cardia, upper body, middle body, lesser curvature, angle, lower body, antrum, and pylorus). The test data set included a total of 11481 images from 397 patients (72 H. pylori-positive and 325 negative). Patients who tested positive on any of several assays (blood or urine anti-H. pylori immunoglobulin (Ig) G levels, fecal antigen test, or urea breath test) were classified as H. pylori-positive. To compare diagnostic performance with the two CNNs, 23 endoscopists were invited to evaluate the same test data set. According to their experience, the endoscopists were divided into three groups: A "board-certified group," a "relatively experienced group," and a "beginner group". The results showed that for the first CNN, constructed with unclassified images, the area under the receiver operating characteristic (ROC) curve (AUC) was 0.89 at a cutoff value of 0.43. The sensitivity, specificity, accuracy, and diagnostic time of the first CNN were 81.9%, 83.4%, 83.1%, and 3.3 min, respectively. These values for the second CNN were 88.9%, 87.4%, 87.7%, and 3.2 min, respectively, with an AUC of 0.93 at a cutoff value of 0.34. For the endoscopists overall, these values were 79.0%, 83.2%, 82.4%, and 230.1 min, respectively. On statistical analysis, there was no difference in sensitivity, specificity, or accuracy between the first CNN and the 23 endoscopists in the diagnosis of H. pylori infection. However, the second CNN, constructed with images categorized by stomach location, had significantly higher accuracy than the endoscopists (by 5.3%; 95%CI: 0.3-10.2).
In addition, the board-certified group had significantly higher specificity (89.3% vs 76.3%, P < 0.001) and accuracy (88.6% vs 75.6%, P < 0.001) than the beginner group. Similarly, a significant difference was observed between the relatively experienced group and the beginner group. In brief, the diagnostic ability of the second CNN was almost as good as that of a skilled endoscopist, and in terms of diagnosis time the CNN far surpassed the endoscopists. However, still images were used to construct the CNN algorithm in this study, and whether real-time diagnosis could be realized on dynamic images remains to be investigated.
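The cutoff-dependent metrics quoted throughout these studies all derive from the same confusion-matrix arithmetic: Thresholding a model's output probability at a chosen cutoff yields true/false positives and negatives, from which sensitivity, specificity, and accuracy follow. The sketch below illustrates this relationship; the probabilities and labels are synthetic, not taken from any cited study.

```python
# How a probability cutoff turns model outputs into the sensitivity,
# specificity, and accuracy figures reported by these studies.
# All numbers below are synthetic examples.

def metrics(probs, truths, cutoff):
    """Confusion-matrix metrics for binary predictions at a given cutoff."""
    preds = [p >= cutoff for p in probs]
    tp = sum(p and t for p, t in zip(preds, truths))          # true positives
    tn = sum((not p) and (not t) for p, t in zip(preds, truths))
    fp = sum(p and (not t) for p, t in zip(preds, truths))
    fn = sum((not p) and t for p, t in zip(preds, truths))
    return {
        "sensitivity": tp / (tp + fn),       # recall among infected
        "specificity": tn / (tn + fp),       # recall among uninfected
        "accuracy": (tp + tn) / len(truths),
    }

probs  = [0.9, 0.8, 0.3, 0.6, 0.2, 0.1, 0.7, 0.4]  # model outputs
truths = [True, True, False, True, False, False, False, True]
print(metrics(probs, truths, cutoff=0.5))  # each metric is 0.75 here
```

Sweeping the cutoff from 0 to 1 traces out the ROC curve whose area (AUC) the studies report; the published cutoffs (e.g., 0.43 and 0.34 above) are simply the operating points the authors selected on that curve.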

One weakness of that study was that it did not include the situation after eradication of H. pylori. To address this issue, the authors soon conducted a new study to further elaborate the role of AI in assessing H. pylori infection status. A deep CNN constructed by Shichijo et al[42] in 2019 was pre-trained and fine-tuned on a data set of 98564 endoscopic images from 5236 patients (742 H. pylori-positive, 3649 H. pylori-negative, and 845 H. pylori-eradicated). As in the previous study, this AI-based diagnostic system was developed using images classified according to eight regions of the stomach (cardia, upper body, middle body, lesser curvature, angle, lower body, antrum, and pylorus). An independent test data set of 23699 images from 847 patients (70 H. pylori-positive, 493 H. pylori-negative, and 284 H. pylori-eradicated) was prepared to evaluate the diagnostic accuracy of the constructed CNN. On statistical analysis, the proportions of accurate diagnoses were 80% (465/582) for negative, 84% (147/174) for eradicated, and 48% (44/91) for positive status. This performance is comparable to that of skilled endoscopists, who in one study diagnosed these statuses correctly in 88.9%, 55.8%, and 62.1% of cases, respectively[43]. Subsequently, the authors assessed the diagnostic ability of the CNN to distinguish H. pylori-positive from eradicated status (excluding H. pylori-negative patients). Among 70 positive patients, the CNN correctly diagnosed 46 (66%) as positive, while among 284 eradicated patients, it correctly diagnosed 243 (86%) as eradicated. Nevertheless, this study did not take into account the time elapsed since H. pylori eradication, although the histological features of atrophic gastritis may disappear a few years after eradication[44]; the corresponding endoscopic features may likewise change over time, complicating diagnosis.

In 2019, Zheng et al[45] designed a novel computer-aided decision support system built on a CNN model (ResNet-50, a state-of-the-art CNN consisting of 50 layers). The system was intended to retrospectively evaluate H. pylori infection based on white-light images (WLI) of the stomach. A total of 1507 patients (11729 gastric images), including 847 with H. pylori infection, served as the derivation cohort used to train the algorithm. The authors created three DL models: (1) A single gastric image for all gastric images; (2) A single gastric image by gastric location (fundus, corpus, angularis, and antrum); and (3) Multiple gastric images for the same patient. Afterwards, 452 patients (3755 images), including 310 with H. pylori infection, served as the validation cohort used to evaluate the diagnostic accuracy of the CNN for H. pylori infection. The results showed that for a single gastric image, the AUC, sensitivity, specificity, and accuracy were 0.93, 81.4%, 90.1%, and 84.5%, respectively. When a single gastric image was evaluated by anatomical location, the AUCs from high to low were 0.94 (corpus), 0.91 (angularis), 0.90 (antrum), and 0.82 (fundus). On statistical analysis, the CNN model using a single corpus image had a higher AUC than that using an antrum or fundus image (P < 0.01). More importantly, when multiple stomach images per patient were applied to the CNN model, the AUC, sensitivity, specificity, and accuracy were as high as 0.97, 91.6%, 98.6%, and 93.8%, respectively. Consequently, the CNN model using multiple gastric images had a higher AUC than that using a single gastric image (P < 0.001) or a single corpus image (P < 0.001). When endoscopic images were selected for inclusion in this study, images of poor quality (i.e., blurred images, excessive mucus, food residue, bleeding, and/or insufficient air insufflation) were excluded, although such images cannot be avoided in the actual operation of endoscopy.
Therefore, the CNN's ability to recognize low-quality images needs to be further developed.

In 2020, Yoshii et al[19] established a prediction model based on an ML procedure to prospectively evaluate H. pylori infection status (non-infection, past infection, and current infection) and compared it with assessment by seven well-experienced endoscopists using the Kyoto classification of gastritis. The study recruited a total of 498 subjects (315 non-infection, 104 past infection, and 79 current infection); the gold standard for determining H. pylori infection status was the history of eradication therapy and the presence of H. pylori IgG antibody. The results showed that the overall diagnostic accuracy of the seven endoscopists was 82.9%. The diagnostic accuracy of the prediction model without H. pylori eradication history was 88.6%, and that of the model with eradication history was 93.4%; the results clearly improved when eradication history was included. There was no significant difference in diagnostic accuracy between the predictive model and the skilled endoscopists. One limitation of this study was that only one test method was used to evaluate current H. pylori infection status. In addition, the urea breath test or fecal antigen test would evaluate current H. pylori infection more accurately than H. pylori IgG antibody levels, especially in patients with an H. pylori antibody titer of 3-10 U/mL.

All of the above studies used WLI to build AI-based CAD systems. In addition, some reports have shown the potential of image-enhanced endoscopies (IEEs), such as blue laser imaging (BLI), linked color imaging (LCI), and NBI, in the diagnosis of H. pylori infection[46-48]. In 2018, Nakashima et al[49] built an AI diagnostic system based on a deep CNN algorithm for prospective diagnosis of H. pylori infection. A total of 222 subjects (105 H. pylori-positive) were recruited and underwent esophagogastroduodenoscopy and a serum test for H. pylori IgG antibodies. A serum H. pylori IgG antibody titer ≥ 10 U/mL was considered positive for H. pylori infection, while a titer < 3.0 U/mL was considered negative; subjects with titers between 3.0 and 9.9 U/mL were excluded. In this study, 162 subjects (1944 images), including 75 with H. pylori infection, were enrolled as the training group for AI training. For the remaining 60 subjects (30 H. pylori-positive and 30 H. pylori-negative), one WLI, one BLI-bright, and one LCI image of the lesser curvature of the gastric body were collected as the test group to evaluate the diagnostic performance of the AI. On statistical analysis, the AUC, sensitivity, and specificity for WLI were 0.66, 66.7%, and 60.0%, respectively. These indicators were 0.96, 96.7%, and 86.7% for BLI-bright, and 0.95, 96.7%, and 83.3% for LCI, respectively. The AUCs obtained for BLI-bright and LCI were markedly larger than that for WLI (P < 0.01). Clearly, this new AI diagnostic system was far better adapted to laser IEEs than to WLI and demonstrated an excellent ability to diagnose H. pylori infection using IEEs. Unfortunately, patients with a history of H. pylori eradication therapy were not included in this study, because this AI system was only an elementary tool and could not fully evaluate the complex features of the stomach.

In 2020, Yasuda et al[21] constructed an automatic diagnosis system for H. pylori infection based on the SVM algorithm using LCI images. The authors aimed to use this system to retrospectively diagnose H. pylori infection and compared its accuracy with that of endoscopists. In this study, endoscopic images of 32 patients (128 images in total) were included as training data, with four images collected from each patient from the lesser (angle-lower body and middle-upper body) and greater (angle-lower body and middle-upper body) curvature. The diagnosis of H. pylori infection was based on more than two different tests: Histological examination, a serum antibody test, a stool antigen test, and/or a 13C-urea breath test. Of these subjects, 14 were H. pylori-positive and 18 were negative. The authors then used 525 LCI images from 105 patients (42 H. pylori-infected, 46 post-eradication, and 17 uninfected), collected from the lesser (angle-lower body and middle-upper body) and greater (angle-lower body and middle-upper body) curvature and the fornix, to evaluate the diagnostic capability of the system. It is worth noting that for the H. pylori post-eradication subjects, more than 1 year (average, 5.6 years) had passed since H. pylori was successfully eradicated before they underwent endoscopy. At the same time, three doctors with different levels of experience (A, an expert involved in the development of LCI; B, a gastroenterology specialist; and C, a senior resident) evaluated the same LCI images. The results showed that the accuracies of the AI system and doctors A, B, and C in the diagnosis of H. pylori infection were 87.6%, 90.5%, 89.5%, and 86.7%, respectively. The accuracy of the AI system was higher than that of the least experienced doctor (doctor C), but there was no significant difference between the diagnoses of the doctors and the AI system (P > 0.05). In the sub-analysis of patients divided by H. pylori infection status, the accuracies of the AI system and doctors A, B, and C in the diagnosis of H. pylori post-eradication status were 82.6%, 87.0%, 89.1%, and 76.1%, respectively. In the sub-analysis of AI diagnosis for each image of a stomach area, the accuracy for the lesser curvature of the middle-upper body (88.6%) was significantly higher than that for the fornix (69.5%) and the greater curvature of the middle-upper body (73.3%). However, owing to the small number of samples included in this study, there may be a risk of large sampling error.


The above studies demonstrate to a great extent that the application of AI in the endoscopic diagnosis of H. pylori infection is practical, feasible, and promising. Detailed information on these studies is shown in Table 1. Compared with manual identification and diagnosis by endoscopists, CAD systems based on AI technology have many irreplaceable advantages: (1) High accuracy: According to current studies, AI is better than novice endoscopists in the diagnosis of H. pylori infection in terms of sensitivity, specificity, and accuracy, and is almost comparable to skilled endoscopists; (2) High efficiency: Thanks to today's highly developed computers, AI can classify thousands of endoscopic images in minutes, a task that would cost endoscopists a great deal of time and energy. Such efficient image recognition also lays the foundation for real-time diagnosis of H. pylori infection under endoscopy; (3) High quality control: Some studies have found that the adenoma detection rate decreases gradually as endoscopists' working hours lengthen, suggesting that endoscopist fatigue may reduce the effectiveness of screening colonoscopy[50,51]. A CAD system based on AI technology, by contrast, is not disturbed by such external factors and provides excellent quality control; (4) High objectivity: Judging H. pylori infection by observing features of the gastric mucosa under endoscopy is inherently subjective. Although decision-making power remains in the hands of endoscopists, AI-assisted endoscopy can provide an objective second opinion as a reference[52]; and (5) High-effect teaching: AI is capable of undertaking the teaching work of skilled endoscopists, providing novices with more accessible, convenient, and objective guidance.

Table 1 Characteristics of current studies about AI-assisted endoscopic diagnosis of Helicobacter pylori infection.

Ref. | Type of AI | Type of endoscopy | Training set | Validation set | AUC | Sensitivity (%) | Specificity (%) | Accuracy (%)
Huang et al[39], 2004 | RFSNN | WLI | 30 patients | 74 patients | NA | 85.4 | 90.9 | NA
Huang et al[40], 2008 | SVM with SFFS | WLI | 236 patients | 236 patients | NA | 82.6 (antrum); 89.1 (body); 100 (cardia) | 94.0 (antrum); 85.8 (body); 72.0 (cardia) | 87.8 (antrum); 87.6 (body); 86.7 (cardia)
Huang et al[40], 2008 | SVM without SFFS | WLI | 236 patients | 236 patients | NA | 98.5 (antrum); 98.7 (body); 99.1 (cardia) | 70.8 (antrum); 71.5 (body); 70.3 (cardia) | 86.3 (antrum); 86.4 (body); 86.0 (cardia)
Shichijo et al[41], 2017 | CNN (first) | WLI | 1750 patients, 32208 images | 397 patients, 11481 images | 0.89 | 81.9 | 83.4 | 83.1
Shichijo et al[41], 2017 | CNN (second, constructed according to anatomical locations) | WLI | 1750 patients, 32208 images | 397 patients, 11481 images | 0.93 | 88.9 | 87.4 | 87.7
Shichijo et al[42], 2019 | CNN | WLI | 5236 patients, 98564 images | 847 patients, 23699 images | NA | NA | NA | 48 (H. pylori-positive); 84 (H. pylori-eradicated); 80 (H. pylori-negative)
Zheng et al[45], 2019 | CNN (first, single model for all images) | WLI | 1507 patients, 76146 images | 452 patients, 3755 images | 0.93 | 81.4 | 90.1 | 84.5
Zheng et al[45], 2019 | CNN (second, single image by different locations) | WLI | 1507 patients, 76146 images | 452 patients, 3755 images | 0.90 (antrum); 0.91 (angularis); 0.94 (corpus); 0.82 (fundus) | 76.1 (antrum); 78.8 (angularis); 81.6 (corpus); 72.4 (fundus) | 88.5 (antrum); 90.5 (angularis); 92.1 (corpus); 80.5 (fundus) | 80.3 (antrum); 82.8 (angularis); 85.6 (corpus); 75.3 (fundus)
Zheng et al[45], 2019 | CNN (third, multiple images per patient) | WLI | 1507 patients, 76146 images | 452 patients, 3755 images | 0.97 | 91.6 | 98.6 | 93.8
Yoshii et al[19], 2020 | ML (model without H. pylori eradication history) | WLI | NA | 498 patients | NA | 91.6 (non-infection); 75.0 (past infection); 59.5 (current infection) | 88.6 (non-infection); 89.9 (past infection); 94.7 (current infection) | 88.6
Yoshii et al[19], 2020 | ML (model with H. pylori eradication history) | WLI | NA | 498 patients | NA | 94.0 (non-infection); 94.0 (past infection); 88.1 (current infection) | 93.4 (non-infection); 100.0 (past infection); 94.7 (current infection) | 93.4
Nakashima et al[49], 2018 | CNN | WLI | 162 patients, 1944 images | 60 patients, 60 images | 0.66 | 66.7 | 60.0 | NA
Nakashima et al[49], 2018 | CNN | BLI-bright | 162 patients, 1944 images | 60 patients, 60 images | 0.96 | 96.7 | 86.7 | NA
Nakashima et al[49], 2018 | CNN | LCI | 162 patients, 1944 images | 60 patients, 60 images | 0.95 | 96.7 | 83.3 | NA
Yasuda et al[21], 2020 | SVM | LCI | 32 patients, 128 images | 105 patients, 525 images | NA | 90.4 | 85.7 | 87.6

AI: Artificial intelligence; AUC: Area under the curve; BLI: Blue laser imaging; CNN: Convolutional neural network; LCI: Linked color imaging; ML: Machine learning; NA: Not available; RFSNN: Refined feature selection with neural network; SFFS: Sequential forward floating selection; SVM: Support vector machine; WLI: White light imaging.
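The sensitivity, specificity, and accuracy values in Table 1 are all derived from a confusion matrix built against a gold-standard determination of H. pylori status. A minimal sketch, with hypothetical counts that are not taken from any of the cited studies:

```python
def diagnostic_metrics(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float, float]:
    """Return (sensitivity, specificity, accuracy) in percent from a 2x2
    confusion matrix: tp/fn among gold-standard positives, tn/fp among
    gold-standard negatives."""
    sensitivity = 100.0 * tp / (tp + fn)            # true-positive rate
    specificity = 100.0 * tn / (tn + fp)            # true-negative rate
    accuracy = 100.0 * (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical example: 90 infected patients correctly flagged, 10 missed,
# 85 uninfected correctly cleared, 15 falsely flagged.
sens, spec, acc = diagnostic_metrics(tp=90, fn=10, tn=85, fp=15)
print(f"sensitivity={sens:.1f}%  specificity={spec:.1f}%  accuracy={acc:.1f}%")
# -> sensitivity=90.0%  specificity=85.0%  accuracy=87.5%
```

Note that overall accuracy depends on the prevalence of infection in the validation set, which is one reason the same model can report different accuracies across cohorts.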

However, the application of AI in the endoscopic diagnosis of H. pylori infection is still at a preliminary research stage, and many limitations remain to be overcome. Translating this technology into real clinical practice is promising, but much research and further refinement are needed before that can happen. First, all of the above studies are single-center studies, and most used images from a single endoscopic device. Images from different endoscopy centers and devices may not be compatible, which limits the extensibility of the CAD systems developed by the researchers and the generalizability of their results. Second, most studies to date have adopted a retrospective design, which is subject to considerable selection bias: images of high quality or with distinct features of H. pylori infection may be preferentially included, which probably exaggerates the diagnostic performance of AI and overestimates its accuracy.
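A toy sketch of why single-center development can mislead: a model frozen after development at one center is evaluated both on held-out data from the same center and on data from an independent center with a different case mix. All cohorts and the trivial majority-vote "model" below are hypothetical, for illustration only:

```python
import random

def train(dataset):
    # Stand-in "model": predicts the majority label seen during development.
    positives = sum(label for _, label in dataset)
    return (lambda image: 1) if positives * 2 >= len(dataset) else (lambda image: 0)

def accuracy(model, dataset):
    return sum(model(img) == label for img, label in dataset) / len(dataset)

rng = random.Random(42)
# Hypothetical cohorts: the development center has ~60% H. pylori prevalence,
# the independent center only ~30% (a different patient spectrum).
center_a = [(None, int(rng.random() < 0.6)) for _ in range(1000)]
center_b = [(None, int(rng.random() < 0.3)) for _ in range(500)]

dev, internal_val = center_a[:800], center_a[800:]
model = train(dev)
print("internal validation accuracy:", accuracy(model, internal_val))
print("external validation accuracy:", accuracy(model, center_b))
```

The internal figure looks respectable while the external one collapses, even though nothing about the model changed; only evaluation on an independent cohort reveals the gap.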

In addition, researchers and endoscopists need to be aware of potential pitfalls and biases in AI research, such as overfitting, spectrum bias, data snooping bias, straw man bias, and P-hacking, which can be reduced or eliminated through rigorous study design and appropriate methods[53]. Overfitting occurs when an AI algorithm adapts too closely to its training dataset, so that the resulting prediction system does not generalize well to new datasets. Translating, rotating, scaling, and clipping the original endoscopic images to enlarge datasets may be one cause of overfitting. Spectrum bias occurs when the training dataset does not adequately represent the range of patients to whom the system will be applied in clinical practice (the target population)[54]. External validation on datasets that are independent of model development and collected in a way that minimizes spectrum bias is necessary to establish the real performance of an AI algorithm and is important in the verification of any diagnostic or predictive model[55,56]. Regrettably, none of the studies in this review used external validation to evaluate the established AI systems. It is also worth noting one unavoidable disadvantage of AI that needs to be addressed: its “black box” nature (lack of interpretability), meaning that the technology cannot explain its decision-making process. Yet interpretability is extremely important in clinical practice, as it can provide diagnostic evidence, help reduce bias, and build social acceptance. Some methods, such as the class activation map, can mitigate the “black box” nature and will hopefully be applied in future research[57].
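The class activation map mentioned above can be sketched in a few lines of NumPy: it is a weighted sum of the final convolutional feature maps, using the classifier weights for the class of interest, upsampled for overlay on the input image. All shapes and arrays below are illustrative stand-ins; no cited CNN is reproduced:

```python
import numpy as np

def class_activation_map(feature_maps: np.ndarray,
                         class_weights: np.ndarray,
                         out_hw: tuple) -> np.ndarray:
    """Weighted sum of final-layer conv feature maps for one target class
    (e.g., "H. pylori positive"), highlighting image regions that drove
    the prediction.

    feature_maps: (K, h, w) activations from the last convolutional layer
    class_weights: (K,) classifier weights linking each map to the class
    out_hw: (H, W) size to upsample to, matching the endoscopic image
    """
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (h, w)
    cam = np.maximum(cam, 0.0)                               # keep positive evidence
    if cam.max() > 0:
        cam /= cam.max()                                     # normalize to [0, 1]
    # Nearest-neighbour upsampling so the heatmap can overlay the image.
    H, W = out_hw
    rows = np.arange(H) * cam.shape[0] // H
    cols = np.arange(W) * cam.shape[1] // W
    return cam[np.ix_(rows, cols)]

fmaps = np.random.default_rng(0).random((8, 7, 7))  # 8 hypothetical 7x7 feature maps
weights = np.random.default_rng(1).random(8)
heatmap = class_activation_map(fmaps, weights, (224, 224))
print(heatmap.shape, float(heatmap.max()))  # (224, 224) 1.0
```

Overlaying such a heatmap on the endoscopic frame shows which mucosal regions the network weighted most heavily, giving the endoscopist a visual check on the otherwise opaque decision.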

Besides, some studies divided H. pylori infection status only into infected and uninfected, without considering the post-eradication state, which is not in line with clinical reality. Some studies used only a single diagnostic method as the gold standard for H. pylori infection, which undermines the reliability of the reference diagnosis and thus of the measured accuracy. Some studies included only a small number of subjects and images, which may cause large errors and weaken the credibility of the conclusions. IEE has great potential to improve the diagnosis rate of H. pylori infection, but few studies have constructed AI-based CAD systems using IEE images. What is more, all of the studies in this review were conducted in Asia, so potential racial differences cannot be excluded.

Finally, before any new technology, AI included, is introduced into medical practice, ethical problems cannot be avoided and need to be properly solved. AI is not perfect and will not make perfect predictions. If an AI-based CAD system misdiagnoses or misses a diagnosis, who will be held accountable: the endoscopist, the medical institution, or the manufacturer? What attitude should endoscopists take towards AI diagnoses: question and reject them, learn from them, or accept them indiscriminately? And in the era of AI, how can a harmonious doctor-patient relationship be built?

Looking forward, we should expect a “perfect study”: a multicenter, large-sample, generalizable, prospective study with strict inclusion/exclusion criteria, a suitable diagnostic gold standard, and external validation on independent third-party datasets, using high-quality data to establish the diagnostic accuracy and stability of an AI-based CAD system for judging H. pylori infection status. More importantly, ethical principles, laws, and regulations related to AI technology need to be improved to protect everyone's legitimate interests. It should be pointed out, however, that AI will not completely replace physicians; rather, it will increase diagnostic accuracy, improve diagnostic efficiency, and reduce physicians' burden. Health care workers must still weigh patients' preferences, circumstances, and ethics before making decisions, and this AI cannot replace[58].


The era of AI is coming, bringing both opportunities and challenges. AI is undoubtedly an excellent assistant that can help endoscopists evaluate H. pylori infection status more quickly, accurately, and easily under endoscopy. At the same time, several practical issues and ethical considerations need to be addressed before AI is applied in clinical practice.


Manuscript source: Invited manuscript

Specialty type: Gastroenterology and hepatology

Country/Territory of origin: China

Peer-review report’s scientific quality classification

Grade A (Excellent): 0

Grade B (Very good): B

Grade C (Good): 0

Grade D (Fair): 0

Grade E (Poor): 0

P-Reviewer: Yasuda T S-Editor: Gao CC L-Editor: Wang TQ P-Editor: Wang LYT

1. Fischbach W, Malfertheiner P. Helicobacter Pylori Infection. Dtsch Arztebl Int. 2018;115:429-436.
2. Gravina AG, Zagari RM, De Musis C, Romano L, Loguercio C, Romano M. Helicobacter pylori and extragastric diseases: A review. World J Gastroenterol. 2018;24:3204-3221.
3. Zamani M, Ebrahimtabar F, Zamani V, Miller WH, Alizadeh-Navaei R, Shokri-Shirvani J, Derakhshan MH. Systematic review with meta-analysis: the worldwide prevalence of Helicobacter pylori infection. Aliment Pharmacol Ther. 2018;47:868-876.
4. Makristathis A, Hirschl AM, Mégraud F, Bessède E. Review: Diagnosis of Helicobacter pylori infection. Helicobacter. 2019;24 Suppl 1:e12641.
5. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, Thrun S. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542:115-118.
6. Yeung S, Downing NL, Fei-Fei L, Milstein A. Bedside Computer Vision - Moving Artificial Intelligence from Driver Assistance to Patient Safety. N Engl J Med. 2018;378:1271-1273.
7. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, Venugopalan S, Widner K, Madams T, Cuadros J, Kim R, Raman R, Nelson PC, Mega JL, Webster DR. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA. 2016;316:2402-2410.
8. Byrne MF, Chapados N, Soudan F, Oertel C, Linares Pérez M, Kelly R, Iqbal N, Chandelier F, Rex DK. Real-time differentiation of adenomatous and hyperplastic diminutive colorectal polyps during analysis of unaltered videos of standard colonoscopy using a deep learning model. Gut. 2019;68:94-100.
9. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25:44-56.
10. Ono S, Kato M, Suzuki M, Ishigaki S, Takahashi M, Haneda M, Mabe K, Shimizu Y. Frequency of Helicobacter pylori-negative gastric cancer and gastric mucosal atrophy in a Japanese endoscopic submucosal dissection series including histological, endoscopic and serological atrophy. Digestion. 2012;86:59-65.
11. Matsuo T, Ito M, Takata S, Tanaka S, Yoshihara M, Chayama K. Low prevalence of Helicobacter pylori-negative gastric cancer among Japanese. Helicobacter. 2011;16:415-419.
12. Uemura N, Okamoto S, Yamamoto S, Matsumura N, Yamaguchi S, Yamakido M, Taniyama K, Sasaki N, Schlemper RJ. Helicobacter pylori infection and the development of gastric cancer. N Engl J Med. 2001;345:784-789.
13. Kamada T, Hata J, Sugiu K, Kusunoki H, Ito M, Tanaka S, Inoue K, Kawamura Y, Chayama K, Haruma K. Clinical features of gastric cancer discovered after successful eradication of Helicobacter pylori: results from a 9-year prospective follow-up study in Japan. Aliment Pharmacol Ther. 2005;21:1121-1126.
14. Fukase K, Kato M, Kikuchi S, Inoue K, Uemura N, Okamoto S, Terao S, Amagai K, Hayashi S, Asaka M; Japan Gast Study Group. Effect of eradication of Helicobacter pylori on incidence of metachronous gastric carcinoma after endoscopic resection of early gastric cancer: an open-label, randomised controlled trial. Lancet. 2008;372:392-397.
15. Mabe K, Takahashi M, Oizumi H, Tsukuma H, Shibata A, Fukase K, Matsuda T, Takeda H, Kawata S. Does Helicobacter pylori eradication therapy for peptic ulcer prevent gastric cancer? World J Gastroenterol. 2009;15:4290-4297.
16. Sugano K, Tack J, Kuipers EJ, Graham DY, El-Omar EM, Miura S, Haruma K, Asaka M, Uemura N, Malfertheiner P; faculty members of Kyoto Global Consensus Conference. Kyoto global consensus report on Helicobacter pylori gastritis. Gut. 2015;64:1353-1367.
17. Correa P. A human model of gastric carcinogenesis. Cancer Res. 1988;48:3554-3560.
18. Haruma K, Kato M, Inoue K, Murakami K, Kamada T. Kyoto classification of gastritis. Tokyo: Nihon Medical Center, 2017.
19. Yoshii S, Mabe K, Watano K, Ohno M, Matsumoto M, Ono S, Kudo T, Nojima M, Kato M, Sakamoto N. Validity of endoscopic features for the diagnosis of Helicobacter pylori infection status based on the Kyoto classification of gastritis. Dig Endosc. 2020;32:74-83.
20. Toyoshima O, Nishizawa T, Koike K. Endoscopic Kyoto classification of Helicobacter pylori infection and gastric cancer risk diagnosis. World J Gastroenterol. 2020;26:466-477.
21. Yasuda T, Hiroyasu T, Hiwa S, Okada Y, Hayashi S, Nakahata Y, Yasuda Y, Omatsu T, Obora A, Kojima T, Ichikawa H, Yagi N. Potential of automatic diagnosis system with linked color imaging for diagnosis of Helicobacter pylori infection. Dig Endosc. 2020;32:373-381.
22. Kimura K, Takemoto T. An Endoscopic Recognition of the Atrophic Border and its Significance in Chronic Gastritis. Endoscopy. 1969;1:3.
23. Rugge M, Meggio A, Pennelli G, Piscioli F, Giacomelli L, De Pretis G, Graham DY. Gastritis staging in clinical practice: the OLGA staging system. Gut. 2007;56:631-636.
24. Capelle LG, de Vries AC, Haringsma J, Ter Borg F, de Vries RA, Bruno MJ, van Dekken H, Meijer J, van Grieken NC, Kuipers EJ. The staging of gastritis with the OLGA system by using intestinal metaplasia as an accurate alternative for atrophic gastritis. Gastrointest Endosc. 2010;71:1150-1158.
25. Rugge M, Correa P, Di Mario F, El-Omar E, Fiocca R, Geboes K, Genta RM, Graham DY, Hattori T, Malfertheiner P, Nakajima S, Sipponen P, Sung J, Weinstein W, Vieth M. OLGA staging for gastritis: a tutorial. Dig Liver Dis. 2008;40:650-658.
26. Lee JG, Jun S, Cho YW, Lee H, Kim GB, Seo JB, Kim N. Deep Learning in Medical Imaging: General Overview. Korean J Radiol. 2017;18:570-584.
27. Robert C. Machine Learning, a Probabilistic Perspective. Chance. 2014;27:62-63.
28. Shalev-Shwartz S, Ben-David S. Understanding machine learning: From theory to algorithms. Cambridge: Cambridge University Press, 2014.
29. Min JK, Kwak MS, Cha JM. Overview of Deep Learning in Gastrointestinal Endoscopy. Gut Liver. 2019;13:388-393.
30. Takiyama H, Ozawa T, Ishihara S, Fujishiro M, Shichijo S, Nomura S, Miura M, Tada T. Automatic anatomical classification of esophagogastroduodenoscopy images using deep convolutional neural networks. Sci Rep. 2018;8:7497.
31. Mori Y, Kudo SE, Mohmed HEN, Misawa M, Ogata N, Itoh H, Oda M, Mori K. Artificial intelligence and upper gastrointestinal endoscopy: Current status and future perspective. Dig Endosc. 2019;31:378-388.
32. Krizhevsky A, Sutskever I, Hinton G. ImageNet Classification with Deep Convolutional Neural Networks. Commun ACM. 2017;60:84-90.
33. van der Sommen F, Zinger S, Curvers WL, Bisschops R, Pech O, Weusten BL, Bergman JJ, de With PH, Schoon EJ. Computer-aided detection of early neoplastic lesions in Barrett's esophagus. Endoscopy. 2016;48:617-624.
34. Everson MA, Garcia-Peraza-Herrera L, Wang HP, Lee CT, Chung CS, Hsieh PH, Chen CC, Tseng CH, Hsu MH, Vercauteren T, Ourselin S, Kashin S, Bisschops R, Pech O, Lovat L, Wang WL, Haidry RJ. A clinically interpretable convolutional neural network for the real-time prediction of early squamous cell cancer of the esophagus: comparing diagnostic performance with a panel of expert European and Asian endoscopists. Gastrointest Endosc. 2021.
35. Xu M, Zhou W, Wu L, Zhang J, Wang J, Mu G, Huang X, Li Y, Yuan J, Zeng Z, Wang Y, Huang L, Liu J, Yu H. Artificial intelligence in diagnosis of gastric precancerous conditions by image-enhanced endoscopy: a multicenter, diagnostic study (with video). Gastrointest Endosc. 2021.
36. Kanesaka T, Lee TC, Uedo N, Lin KP, Chen HZ, Lee JY, Wang HP, Chang HT. Computer-aided diagnosis for identifying and delineating early gastric cancers in magnifying narrow-band imaging. Gastrointest Endosc. 2018;87:1339-1344.
37. Park J, Hwang Y, Nam JH, Oh DJ, Kim KB, Song HJ, Kim SH, Kang SH, Jung MK, Jeong Lim Y. Artificial intelligence that determines the clinical significance of capsule endoscopy images can increase the efficiency of reading. PLoS One. 2020;15:e0241474.
38. Urban G, Tripathi P, Alkayali T, Mittal M, Jalali F, Karnes W, Baldi P. Deep Learning Localizes and Identifies Polyps in Real Time With 96% Accuracy in Screening Colonoscopy. Gastroenterology. 2018;155:1069-1078.e8.
39. Huang CR, Sheu BS, Chung PC, Yang HB. Computerized diagnosis of Helicobacter pylori infection and associated gastric inflammation from endoscopic images by refined feature selection using a neural network. Endoscopy. 2004;36:601-608.
40. Huang CR, Chung PC, Sheu BS, Kuo HJ, Popper M. Helicobacter pylori-related gastric histology classification using support-vector-machine-based feature selection. IEEE Trans Inf Technol Biomed. 2008;12:523-531.
41. Shichijo S, Nomura S, Aoyama K, Nishikawa Y, Miura M, Shinagawa T, Takiyama H, Tanimoto T, Ishihara S, Matsuo K, Tada T. Application of Convolutional Neural Networks in the Diagnosis of Helicobacter pylori Infection Based on Endoscopic Images. EBioMedicine. 2017;25:106-111.
42. Shichijo S, Endo Y, Aoyama K, Takeuchi Y, Ozawa T, Takiyama H, Matsuo K, Fujishiro M, Ishihara S, Ishihara R, Tada T. Application of convolutional neural networks for evaluating Helicobacter pylori infection status on the basis of endoscopic images. Scand J Gastroenterol. 2019;54:158-163.
43. Watanabe K, Nagata N, Shimbo T, Nakashima R, Furuhata E, Sakurai T, Akazawa N, Yokoi C, Kobayakawa M, Akiyama J, Mizokami M, Uemura N. Accuracy of endoscopic diagnosis of Helicobacter pylori infection according to level of endoscopic experience and the effect of training. BMC Gastroenterol. 2013;13:128.
44. Kodama M, Murakami K, Okimoto T, Sato R, Uchida M, Abe T, Shiota S, Nakagawa Y, Mizukami K, Fujioka T. Ten-year prospective follow-up of histological changes at five points on the gastric mucosa as recommended by the updated Sydney system after Helicobacter pylori eradication. J Gastroenterol. 2012;47:394-403.
45. Zheng W, Zhang X, Kim JJ, Zhu X, Ye G, Ye B, Wang J, Luo S, Li J, Yu T, Liu J, Hu W, Si J. High Accuracy of Convolutional Neural Network for Evaluation of Helicobacter pylori Infection Based on Endoscopic Images: Preliminary Experience. Clin Transl Gastroenterol. 2019;10:e00109.
46. Nishikawa Y, Ikeda Y, Murakami H, Hori SI, Hino K, Sasaki C, Nishikawa M. Classification of atrophic mucosal patterns on Blue LASER Imaging for endoscopic diagnosis of Helicobacter pylori-related gastritis: A retrospective, observational study. PLoS One. 2018;13:e0193197.
47. Takeda T, Asaoka D, Nojiri S, Nishiyama M, Ikeda A, Yatagai N, Ishizuka K, Hiromoto T, Okubo S, Suzuki M, Nakajima A, Nakatsu Y, Komori H, Akazawa Y, Nakagawa Y, Izumi K, Matsumoto K, Ueyama H, Sasaki H, Shimada Y, Osada T, Hojo M, Kato M, Nagahara A. Linked Color Imaging and the Kyoto Classification of Gastritis: Evaluation of Visibility and Inter-Rater Reliability. Digestion. 2020;101:598-607.
48. Okubo M, Tahara T, Shibata T, Nakamura M, Kamiya Y, Yoshioka D, Maeda Y, Yonemura J, Ishizuka T, Arisawa T, Hirata I. Usefulness of magnifying narrow-band imaging endoscopy in the Helicobacter pylori-related chronic gastritis. Digestion. 2011;83:161-166.
49. Nakashima H, Kawahira H, Kawachi H, Sakaki N. Artificial intelligence diagnosis of Helicobacter pylori infection using blue laser imaging-bright and linked color imaging: a single-center prospective study. Ann Gastroenterol. 2018;31:462-468.
50. Lee CK, Cha JM, Kim WJ. Endoscopist Fatigue May Contribute to a Decline in the Effectiveness of Screening Colonoscopy. J Clin Gastroenterol. 2015;49:e51-e56.
51. Lee A, Iskander JM, Gupta N, Borg BB, Zuckerman G, Banerjee B, Gyawali CP. Queue position in the endoscopic schedule impacts effectiveness of colonoscopy. Am J Gastroenterol. 2011;106:1457-1465.
52. Hoogenboom SA, Bagci U, Wallace MB. AI in gastroenterology. The current state of play and the potential. How will it affect our practice and when? Tech Gastrointest Endosc. 2019;150634.
53. England JR, Cheng PM. Artificial Intelligence for Medical Image Analysis: A Guide for Authors and Reviewers. AJR Am J Roentgenol. 2019;212:513-519.
54. Park SH, Han K. Methodologic Guide for Evaluating Clinical Performance and Effect of Artificial Intelligence Technology for Medical Diagnosis and Prediction. Radiology. 2018;286:800-809.
55. Steyerberg EW. Overfitting and optimism in prediction models. In: Steyerberg EW. Clinical Prediction Models. Cham: Springer, 2019: 95-112.
56. Yang YJ, Bang CS. Application of artificial intelligence in gastroenterology. World J Gastroenterol. 2019;25:1666-1683.
57. Philbrick KA, Yoshida K, Inoue D, Akkus Z, Kline TL, Weston AD, Korfiatis P, Takahashi N, Erickson BJ. What Does Deep Learning See? AJR Am J Roentgenol. 2018;211:1184-1193.
58. Le Berre C, Sandborn WJ, Aridhi S, Devignes MD, Fournier L, Smaïl-Tabbone M, Danese S, Peyrin-Biroulet L. Application of Artificial Intelligence to Gastroenterology and Hepatology. Gastroenterology. 2020;158:76-94.e2.