Minireviews Open Access
Copyright ©The Author(s) 2021. Published by Baishideng Publishing Group Inc. All rights reserved.
Artif Intell Gastrointest Endosc. Jun 28, 2021; 2(3): 71-78
Published online Jun 28, 2021. doi: 10.37126/aige.v2.i3.71
Application of convolutional neural network in detecting and classifying gastric cancer
Xin-Yi Feng, Xi Xu, Yun Zhang, Ye-Min Xu, Qiang She, Bin Deng, Department of Gastroenterology, Affiliated Hospital of Yangzhou University, Yangzhou 225000, Jiangsu Province, China
ORCID number: Xin-Yi Feng (0000-0001-5960-1028); Xi Xu (0000-0003-1810-7220); Yun Zhang (0000-0002-2801-7547); Ye-Min Xu (0000-0002-9058-1966); Qiang She (0000-0001-8861-2674); Bin Deng (0000-0002-5590-3755).
Author contributions: Feng XY and Xu X contributed equally to this work; Feng XY and Xu X conceived and drafted the manuscript; Feng XY, Xu X, Zhang Y, and Xu YM collected the relevant information; She Q and Deng B revised the manuscript.
Supported by The Key Project for Social Development of Yangzhou, No. YZ2020069.
Conflict-of-interest statement: The authors report no conflicts of interest in this work.
Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Bin Deng, MD, Associate Professor, Chief Physician, Department of Gastroenterology, Affiliated Hospital of Yangzhou University, No. 368 Hanjiang Middle Road, Yangzhou 225000, Jiangsu Province, China. chinadbin@126.com
Received: April 27, 2021
Peer-review started: April 27, 2021
First decision: April 28, 2021
Revised: May 21, 2021
Accepted: June 7, 2021
Article in press: June 7, 2021
Published online: June 28, 2021

Abstract

Gastric cancer (GC) is the fifth most common cancer in the world, and at present, esophagogastroduodenoscopy is recognized as an acceptable method for the screening and monitoring of GC. Convolutional neural networks (CNNs) are a type of deep learning model and have been widely used for image analysis. This paper reviews the application and prospects of CNNs in detecting and classifying GC, aiming to introduce a computer-aided diagnosis system and to provide evidence for subsequent studies.

Key Words: Artificial intelligence, Convolutional neural network, Endoscopy, Gastric cancer, Deep learning

Core Tip: With the development of new algorithms and big data, artificial intelligence (AI) based on deep learning, especially the convolutional neural network (CNN), has made great achievements in diagnostic imaging. Esophagogastroduodenoscopy (EGD) is currently the most common method for screening and diagnosing gastric cancer (GC), and combining AI with EGD can improve the diagnostic efficacy for GC. Therefore, we review the application and prospects of CNNs in detecting and classifying GC, aiming to introduce computer-aided diagnosis systems and provide evidence for subsequent studies.



INTRODUCTION

Gastric cancer (GC) is a globally prevalent cancer, and its incidence and mortality rank fifth and fourth, respectively, among cancers worldwide[1]. It is estimated that in 2020 there were over 1 million new cases of GC and 769000 deaths from it globally. The lack of early detection and treatment contributes to the high mortality and poor outcomes of GC[2]. Esophagogastroduodenoscopy (EGD) is currently the most common method for screening and diagnosing GC. However, the efficacy of EGD varies significantly[3]: the reported false negative rate of EGD in detecting GC ranges from 4.6% to 25.8%[4-6]. GC lesions are difficult to recognize because of the subtle changes they produce in the gastric mucosa[7], and the quality of EGD is heavily influenced by the subjective judgment of the endoscopist[8]. Therefore, it is important to develop an objective and reliable method to recognize possible early GC (EGC) lesions and blind spots.

With the development of new algorithms and big data, artificial intelligence (AI) based on deep learning (DL) has made great achievements in diagnostic imaging. As one of the most representative network models in DL, the convolutional neural network (CNN) enhances the accuracy of image analysis and is now being successfully applied to lesion detection in the gastrointestinal tract[9-11]. CNNs have achieved tremendous success and wide application in image recognition and classification[12,13], and applying them to endoscopic diagnosis is expected to improve the diagnostic efficacy for EGC. In this review, we elucidate the application and evolution of CNNs in the detection and classification of GC.

CONVOLUTIONAL NEURAL NETWORK

With the development of neuroscience, researchers have attempted to build artificial neural networks that simulate the structure of the human brain by mathematically modeling neuronal activity. DL has become the mainstream machine learning method in many applications. It is a type of representation learning in which a complex neural network architecture automatically learns representative features by transforming the input information into multiple levels of abstraction[10]. Computer-aided diagnosis requires the extraction of extensive original image data and the application of a series of complex algorithms, and the strong modeling and reasoning ability of DL makes it well suited to producing computer-aided diagnoses.

CNNs are neural networks that share connection weights between hidden units, which shortens computation time and confers translational invariance[14]. A typical CNN framework includes three main components: A convolutional layer, an activation function, and a pooling layer. The convolutional layer is composed of several small matrices. These matrices are convolved across the whole input image, working as filters, and a nonlinear transformation is then applied in an element-wise fashion. Finally, the pooling layer aggregates contiguous values into one scalar; the most commonly used pooling operations are average pooling and max pooling[15,16].
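The three components above can be sketched in a few lines of Python. This is a minimal NumPy illustration only; the tiny image and the hand-picked filter are invented for the example (real CNNs learn their filters from data and stack many such stages):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise nonlinear transformation (activation function)."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Aggregate each size x size block of contiguous values to one scalar."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# A toy 4 x 4 "image" passed through one conv + activation + pooling stage
image = np.array([[1., 0., 2., 1.],
                  [0., 1., 3., 0.],
                  [2., 0., 1., 1.],
                  [1., 2., 0., 3.]])
edge_filter = np.array([[1., -1.],
                        [1., -1.]])        # a small matrix working as a filter
feature_map = relu(conv2d(image, edge_filter))  # 3 x 3 feature map
pooled = max_pool(feature_map, 2)               # aggregated to 1 x 1
```

Each stage shrinks the spatial size while keeping the strongest local responses, which is what gives CNNs their efficiency on images.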

In the early 1990s, CNNs were used in applications such as object detection and face recognition, and they were first applied to the analysis of medical images in 1993. Lo et al[17] reported the detection of lung nodules using a CNN in 1995. However, owing to the limited computing power and data available at the time, the value of CNNs was long underestimated. In 2012, Krizhevsky et al[18] proposed a CNN with five convolutional layers and three fully connected layers (namely, AlexNet) and achieved breakthrough performance in the ImageNet Large Scale Visual Recognition Challenge. Since then, CNNs have attracted great interest and been widely applied; for example, they have been used to identify diabetic retinopathy from fundus photographs and to distinguish benign proliferative breast lesions from malignant ones[19]. In 2020, Plaksin et al[20] used CNNs to estimate the possibility of diagnosing malignant pleural effusion from images of pleural exudate facies obtained by the method of wedge-shaped dehydration.

Compared with general neural networks, CNNs are better adapted to image structure, feature extraction, and classification, and consequently achieve satisfactory efficiency.

APPLICATION OF CNN IN GC
Automatic detection

At present, CNNs applied to GC detection have shown distinctive improvements. Hirasawa et al[10] constructed and trained a CNN-based diagnostic system on 13584 endoscopic images. The CNN detected 92.2% of GC cases, including small intramucosal GCs, through a quick analysis of an independent test set of 2296 stomach images, a task that is extremely difficult even for experienced endoscopists. To achieve real-time detection during EGD, Ishioka et al[21] tested their CNN system on video images and achieved a high detection rate (94.1%). The detection rate on video images was similar to that on still images, demonstrating the great potential of CNNs in the early detection of GC.

Magnifying endoscopy with narrow band imaging (M-NBI) has been used for the differential diagnosis of various focal, superficial gastric lesions. By observing the microvasculature and fine mucosal structure, M-NBI has a better accuracy in the diagnosis of early GC than ordinary white light endoscopy[22]. Li et al[23] developed a novel CNN-based system for analyzing gastric mucosal lesions observed by M-NBI. The test results showed that the sensitivity, specificity, and accuracy of the CNN system in diagnosing early GC were 91.18%, 90.64%, and 90.91%, respectively. Notably, the specificity and accuracy of CNN diagnostics are comparable to those of experts with more than 10 years of clinical experience.
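The sensitivity, specificity, and accuracy figures quoted throughout this section follow the standard confusion-matrix definitions. As a minimal illustration (the counts below are hypothetical, chosen for the example and not taken from Li et al[23] or any other cited study):

```python
# Hypothetical confusion-matrix counts for a CNN on an endoscopic test set.
tp, fn = 93, 9    # cancer images correctly / incorrectly classified
tn, fp = 155, 16  # non-cancer images correctly / incorrectly classified

sensitivity = tp / (tp + fn)                 # proportion of cancers detected
specificity = tn / (tn + fp)                 # proportion of non-cancers cleared
accuracy = (tp + tn) / (tp + tn + fp + fn)   # overall correct fraction
ppv = tp / (tp + fp)                         # positive predictive value
npv = tn / (tn + fn)                         # negative predictive value
```

Sensitivity governs the false negative rate and specificity the false positive rate, which is why both are reported alongside accuracy in Tables 1 and 3.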

Ikenoyama et al[24] compared the diagnostic ability of a CNN with that of 67 endoscopists, and the results showed that the CNN had a faster processing speed and 25% higher sensitivity than the endoscopists [95% confidence interval (CI): 14.9-32.5]. The use of a CNN can prompt endoscopists to re-examine and evaluate ambiguous lesions, which also helps reduce false negatives and false positives (Table 1).

Table 1 Detailed information on studies concerning automatic detection by convolutional neural network in gastric cancer.
| Ref. | Endoscopic images | Training dataset | Test dataset | Resolution | Sensitivity % | Specificity % | Accuracy/AUC % | PPV % | NPV % |
|------|-------------------|------------------|--------------|------------|---------------|---------------|----------------|-------|-------|
| Hirasawa et al[10] (2018) | WLI/NBI/chromoendoscopy images | 13584 | 2296 | 300 × 300 | 92.2 | NA | NA | 30.6 | NA |
| Ishioka et al[21] (2019) | Video images | NA | 68 | NA | 94.1 | NA | NA | NA | NA |
| Li et al[23] (2020) | M-NBI images | 20000 | 341 | 512 × 512 | 91.18 | 90.64 | 90.91 | 90.64 | 91.18 |
| Ikenoyama et al[24] (2021) | WLI/NBI/chromoendoscopy images | 13584 | 2940 | 300 × 300 | 58.4 | 87.3 | 75.7 | 26.0 | 96.5 |
Histological classification

An excellent endoscopist not only detects mucosal lesions but also distinguishes benign from malignant features. Cho et al[25] trained three CNN models, namely, Inception-v4, Resnet-152, and Inception-Resnet-v2, to classify gastric lesions into five categories: Advanced GC, EGC, high-grade dysplasia, low-grade dysplasia, and non-neoplasm. Among these, the Inception-Resnet-v2 model showed the best performance: the weighted average accuracy reached 84.6%, and the mean areas under the curve (AUCs) for differentiating cancer from non-cancer and neoplasm from non-neoplasm were 0.877 and 0.927, respectively.

To date, pathological diagnosis remains the gold standard for assessing the presence or absence of cancerous lesions, the cancer type, and the degree of malignancy. Nevertheless, diagnostic accuracy and the heavy workload of pathologists remain challenges, and advanced computer-aided technologies are expected to play a key role in assisting pathological diagnosis. By optically scanning histologic tissue slides and converting them into ultrahigh-resolution digital images called whole slide images (WSIs), digital pathology makes such slides available for further investigation[26]. With the rapid development of EGD, the combination of DL models such as CNNs with digital pathology is expected to greatly reduce the increasing workload of pathologists.

Sharma et al[27] explored two computerized applications of CNNs in GC, cancer classification and necrosis detection, based on immunohistochemistry of human epidermal growth factor receptor 2 and hematoxylin-eosin staining of histopathological WSIs. The overall classification accuracies that they obtained were 0.6990 and 0.8144, respectively. However, their study is limited by a small sample size with only 11 WSIs involved.

Iizuka et al[28] collected a large dataset of 4128 WSIs of stomach samples to train a CNN and a recurrent neural network; in the evaluation, the AUCs of the CNN for detecting gastric adenocarcinoma and adenoma reached 0.97 and 0.99, respectively. They proposed that DL models can be used as a component of an integrated workflow alongside slide scanning, prioritizing the most urgent cases, enhancing diagnostic accuracy, and improving workflow efficiency.
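Several of the studies in this section summarize performance as an AUC. As a sketch of what that number means, the fragment below computes AUC as the probability that a randomly chosen positive sample is scored higher than a randomly chosen negative one (the Mann-Whitney interpretation); the `auc` helper and the scores are invented for illustration, not taken from any cited study:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC by pairwise comparison: fraction of (positive, negative) pairs in
    which the positive sample receives the higher model score; ties count 0.5."""
    pos = np.asarray(scores_pos)[:, None]
    neg = np.asarray(scores_neg)[None, :]
    wins = (pos > neg).sum()
    ties = (pos == neg).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical model scores on a tiny test set (4 positives, 4 negatives)
example_auc = auc([0.9, 0.8, 0.7, 0.4], [0.6, 0.3, 0.2, 0.1])  # 0.9375
```

An AUC of 1.0 means every positive outranks every negative, while 0.5 is chance-level ranking, which is why values near 0.97-0.99 indicate near-perfect discrimination.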

Song et al[29] established a massive multicenter WSI dataset and tested their DL-based histopathological diagnosis system for GC detection on slides collected from different hospitals. The AUCs of the AI assistance system on slides from the Chinese PLA General Hospital, Peking Union Medical College Hospital, and the Cancer Hospital, Chinese Academy of Medical Sciences, were 0.986, 0.990, and 0.996, respectively, confirming consistently stable performance. Their model-building approach may also be applied in the future to identify cancers in other organ systems (Table 2).

Table 2 Detailed information on studies concerning histological classification by convolutional neural network in gastric cancer.
| Ref. | Training dataset | Test dataset | Resolution | Group | AUC % |
|------|------------------|--------------|------------|-------|-------|
| Cho et al[25] (2019) | 4205 | 812 | 1280 × 640 | Five-category classification | 84.6 |
| | | | | Cancer vs non-cancer | 87.7 |
| | | | | Neoplasm vs non-neoplasm | 92.7 |
| Sharma et al[27] (2017) | 231000 for cancer classification | NA | 512 × 512 | Cancer classification | 69.9 |
| | 47130 for necrosis detection | | | Necrosis detection | 81.4 |
| Iizuka et al[28] (2020) | 3628 | 500 | 512 × 512 | Adenocarcinoma | 98 |
| | | | | Adenoma | 93.6 |
| Song et al[29] (2020) | 2123 | 3212 from PLAGH | 320 × 320 | Benign and malignant cases and tumour subtypes | 98.6 |
| | | 595 from PUMCH | | | 99.0 |
| | | 987 from CHCAMS | | | 99.6 |
Prediction of depth of tumor invasion

EGC is categorized as a lesion confined to the mucosa (T1A) or the submucosa (T1B). An accurate identification of the depth of tumor invasion is the basis for determining the therapeutic schedule[30]. Endoscopic mucosal changes, such as irregular surfaces and submucosal tumors (e.g., marginal elevation), have been suggested as predictors of the depth of tumor invasion[31].

Zhu et al[11] built a CNN computer-aided detection (CNN-CAD) system to determine the depth of tumor invasion, which is expected to avoid unnecessary gastrectomy. In this system, there was a development dataset of 790 images and a test dataset of 203 images. The final results showed that the AUC for the CNN-CAD system was 0.94 (95%CI: 0.90-0.97), and the overall accuracy was 89.16%, which was significantly higher than that determined by endoscopists (17.25%, 95%CI: 11.63-22.59). Yoon et al[32] proposed a novel loss function for developing an optimized EGC depth prediction model, called the lesion-based visual geometry group-16. Using this novel function, the depth prediction model is able to accurately activate the EGC regions during training and simultaneously measure classification and localization errors. After experimenting with a total of 11539 endoscopic images, including 896 images of T1A-EGC, 809 of T1B-EGC, and 9834 of non-EGC, the AUC of the EGC depth prediction model was 0.851. In this study, it was also demonstrated that histopathological differentiation significantly affects the diagnostic accuracy of AI for determining T staging.

Upper abdominal contrast-enhanced computed tomography (CT) is the main imaging examination for T staging of GC[33]. Zheng et al[34] retrospectively collected 3500 venous phase-enhanced CT images of the upper abdomen from 225 patients with advanced GC, aiming to predict the depth of GC invasion, and extracted regions of interest. The dataset was augmented by cropping and flipping, among other data augmentation methods, and used to train a Faster R-CNN detection model. The AUC of the established CNN model was 0.93, and the recognition accuracies for T2, T3, and T4 GC were 90%, 93%, and 95%, respectively. These findings may help radiologists predict the progression and postoperative outcomes of advanced GC (Table 3).
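The cropping-and-flipping augmentation described above can be sketched as follows. This is a minimal NumPy illustration; the 512 × 512 random array standing in for a CT slice, the `augment` helper, and the 448-pixel crop size are assumptions made for the example, not details taken from Zheng et al[34]:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, crop_size):
    """Return flipped and randomly cropped variants of one image, multiplying
    the effective size of a small training dataset."""
    h, w = image.shape[:2]
    top = rng.integers(0, h - crop_size + 1)   # random crop origin
    left = rng.integers(0, w - crop_size + 1)
    crop = image[top:top + crop_size, left:left + crop_size]
    return [crop, np.fliplr(image), np.flipud(image)]

ct_slice = rng.random((512, 512))              # stand-in for a CT image
variants = augment(ct_slice, crop_size=448)    # three augmented views
```

Such label-preserving transforms let a few thousand original images yield many more training samples, which is one way studies with modest datasets make CNN training feasible.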

Table 3 Detailed information on studies concerning prediction of depth of tumor invasion by convolutional neural network in gastric cancer.
| Ref. | Dataset | Resolution | Sensitivity % | Specificity % | Accuracy/AUC % | PPV % | NPV % |
|------|---------|------------|---------------|---------------|----------------|-------|-------|
| Zhu et al[11] (2019) | Development datasets: 5056; validation datasets: 1264; test dataset: 203 | 299 × 299 | 76.47 | 95.56 | 89.16 | 89.66 | 88.97 |
| Yoon et al[32] (2019) | 11539 images randomly organized into five folds; training:validation:testing dataset ratio 3:1:1 at each fold | NA | 79.2 | 77.8 | 85.1 | 79.3 | 77.7 |
| Zheng et al[34] (2020) | 5855 in total; training:verification dataset ratio 4:1 | 512 × 557 | NA | NA | T2 stage: 90; T3 stage: 93; T4 stage: 95 | NA | NA |
CURRENT EXISTING PROBLEMS
Limitations of studies

Selection bias: In most studies, researchers tend to select clear, typical, high-quality endoscopic images for the training and testing image sets[10,35]. Because low-quality images with air, post-biopsy bleeding, halation, blur, defocusing, or mucus secretion have been excluded, the results of retrospective clinical tests often overestimate real-world performance. Therefore, prospective studies, which are less affected by such biases, should be conducted to improve the accuracy and specificity of clinical trials and ensure the reliability of the results.

Single-center studies: In most studies, the testing images were obtained from a single institution using the same type of endoscope and endoscopic video system, which may introduce bias. In future studies, images obtained from multiple institutions using different types of endoscopic devices should be collected for analysis.

Lack of endoscopic video images: Still images are used for the training and test datasets in most studies, which may limit broad clinical application[36]. Using video images may improve CNN performance and better represent real-life scenarios[21].

Limitations of CNN

False positive and false negative results: The specificity and sensitivity of automatic detection are critical in determining the therapeutic schedule, and false positive and false negative results can lead directly to improper treatment. For example, gastritis manifesting as redness, atrophy, and intestinal metaplasia is easily confused with EGC, which increases the false positive rate[10]. In addition, early-stage cancer lesions are often too small to be found, which increases the false negative rate. False positive and false negative results may be attributed mainly to the limited quantity and quality of the learning samples. It is therefore necessary to collect a large number of high-quality endoscopic images for training algorithms, thus enhancing detection accuracy.

Ethical and moral issues: AI will not completely replace doctors, but who should be responsible for patient safety in the event of misdiagnosis? Patient consent should be obtained before using AI, and responsibility for possible misdiagnosis or incorrect treatment should be clarified in advance[37].

CONCLUSION

As a classical and widely used DL model, the CNN has found broad application in the medical field, especially in EGD-based detection. In remote or crowded areas, CNNs can assist early cancer screening and prevent misdiagnosis caused by a lack of experience or professional knowledge among endoscopists. CNNs are also a promising means of providing online professional training to improve the skills of young endoscopists. Most importantly, CNNs help endoscopists detect, classify, and even predict the invasion depth of EGC.

At present, most studies are still in the early stages of system development. More powerful, efficient, and stable algorithms, together with more prospective studies, are urgently needed to make AI more sensitive, specific, and accurate in cancer detection and classification.

Footnotes

Manuscript source: Invited manuscript

Specialty type: Gastroenterology and hepatology

Country/Territory of origin: China

Peer-review report’s scientific quality classification

Grade A (Excellent): 0

Grade B (Very good): 0

Grade C (Good): C

Grade D (Fair): 0

Grade E (Poor): 0

P-Reviewer: Viswanath YK S-Editor: Gao CC L-Editor: Wang TQ P-Editor: Wang LYT

References
1. Sung H, Ferlay J, Siegel RL, Laversanne M, Soerjomataram I, Jemal A, Bray F. Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA Cancer J Clin. 2021;71:209-249.
2. Zong L, Abe M, Seto Y, Ji J. The challenge of screening for early gastric cancer in China. Lancet. 2016;388:2606.
3. Rutter MD, Senore C, Bisschops R, Domagk D, Valori R, Kaminski MF, Spada C, Bretthauer M, Bennett C, Bellisario C, Minozzi S, Hassan C, Rees C, Dinis-Ribeiro M, Hucl T, Ponchon T, Aabakken L, Fockens P. The European Society of Gastrointestinal Endoscopy Quality Improvement Initiative: developing performance measures. United European Gastroenterol J. 2016;4:30-41.
4. Hosokawa O, Hattori M, Douden K, Hayashi H, Ohta K, Kaizaki Y. Difference in accuracy between gastroscopy and colonoscopy for detection of cancer. Hepatogastroenterology. 2007;54:442-444.
5. Raftopoulos SC, Segarajasingam DS, Burke V, Ee HC, Yusoff IF. A cohort study of missed and new cancers after esophagogastroduodenoscopy. Am J Gastroenterol. 2010;105:1292-1297.
6. Vradelis S, Maynard N, Warren BF, Keshav S, Travis SP. Quality control in upper gastrointestinal endoscopy: detection rates of gastric cancer in Oxford 2005-2008. Postgrad Med J. 2011;87:335-339.
7. Pasechnikov V, Chukov S, Fedorov E, Kikuste I, Leja M. Gastric cancer: prevention, screening and early diagnosis. World J Gastroenterol. 2014;20:13842-13862.
8. Scaffidi MA, Grover SC, Carnahan H, Khan R, Amadio JM, Yu JJ, Dargavel C, Khanna N, Ling SC, Yong E, Nguyen GC, Walsh CM. Impact of experience on self-assessment accuracy of clinical colonoscopy competence. Gastrointest Endosc. 2018;87:827-836.e2.
9. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436-444.
10. Hirasawa T, Aoyama K, Tanimoto T, Ishihara S, Shichijo S, Ozawa T, Ohnishi T, Fujishiro M, Matsuo K, Fujisaki J, Tada T. Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images. Gastric Cancer. 2018;21:653-660.
11. Zhu Y, Wang QC, Xu MD, Zhang Z, Cheng J, Zhong YS, Zhang YQ, Chen WF, Yao LQ, Zhou PH, Li QL. Application of convolutional neural network in the diagnosis of the invasion depth of gastric cancer based on conventional endoscopy. Gastrointest Endosc. 2019;89:806-815.e1.
12. Brinker TJ, Hekler A, Enk AH, von Kalle C. Enhanced classifier training to improve precision of a convolutional neural network to identify images of skin lesions. PLoS One. 2019;14:e0218713.
13. Huang Y, Xu J, Zhou Y, Tong T, Zhuang X; Alzheimer's Disease Neuroimaging Initiative (ADNI). Diagnosis of Alzheimer's Disease via Multi-Modality 3D Convolutional Neural Network. Front Neurosci. 2019;13:509.
14. Ke Q, Li Y. Is Rotation a Nuisance in Shape Recognition? Proc IEEE Conf Comput Vis Pattern Recognit. 2014:4146-4153.
15. Arevalo J, González FA, Ramos-Pollán R, Oliveira JL, Guevara Lopez MA. Representation learning for mammography mass lesion classification with convolutional neural networks. Comput Methods Programs Biomed. 2016;127:248-257.
16. Song Q, Zhao L, Luo X, Dou X. Using Deep Learning for Classification of Lung Nodules on Computed Tomography Images. J Healthc Eng. 2017;2017:8314740.
17. Lo SB, Lou SA, Lin JS, Freedman MT, Chien MV, Mun SK. Artificial convolution neural network techniques and applications for lung nodule detection. IEEE Trans Med Imaging. 1995;14:711-718.
18. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Commun ACM. 2017;60:84-90.
19. Shaban M, Ogur Z, Mahmoud A, Switala A, Shalaby A, Abu Khalifeh H, Ghazal M, Fraiwan L, Giridharan G, Sandhu H, El-Baz AS. A convolutional neural network for the screening and staging of diabetic retinopathy. PLoS One. 2020;15:e0233514.
20. Plaksin SA, Farshatova LI, Veselov IV, Zamyatina EB. [Diagnosis of malignant pleural effusions using convolutional neural networks by the morphometric image analysis of facies of pleural exudate]. Khirurgiia (Mosk). 2020;42-48.
21. Ishioka M, Hirasawa T, Tada T. Detecting gastric cancer from video images using convolutional neural networks. Dig Endosc. 2019;31:e34-e35.
22. Kaise M, Kato M, Urashima M, Arai Y, Kaneyama H, Kanzazawa Y, Yonezawa J, Yoshida Y, Yoshimura N, Yamasaki T, Goda K, Imazu H, Arakawa H, Mochizuki K, Tajiri H. Magnifying endoscopy combined with narrow-band imaging for differential diagnosis of superficial depressed gastric lesions. Endoscopy. 2009;41:310-315.
23. Li L, Chen Y, Shen Z, Zhang X, Sang J, Ding Y, Yang X, Li J, Chen M, Jin C, Chen C, Yu C. Convolutional neural network for the diagnosis of early gastric cancer based on magnifying narrow band imaging. Gastric Cancer. 2020;23:126-132.
24. Ikenoyama Y, Hirasawa T, Ishioka M, Namikawa K, Yoshimizu S, Horiuchi Y, Ishiyama A, Yoshio T, Tsuchida T, Takeuchi Y, Shichijo S, Katayama N, Fujisaki J, Tada T. Detecting early gastric cancer: Comparison between the diagnostic ability of convolutional neural networks and endoscopists. Dig Endosc. 2021;33:141-150.
25. Cho BJ, Bang CS, Park SW, Yang YJ, Seo SI, Lim H, Shin WG, Hong JT, Yoo YT, Hong SH, Choi JH, Lee JJ, Baik GH. Automated classification of gastric neoplasms in endoscopic images using a convolutional neural network. Endoscopy. 2019;51:1121-1129.
26. Jansen I, Lucas M, Savci-Heijink CD, Meijer SL, Marquering HA, de Bruin DM, Zondervan PJ. Histopathology: ditch the slides, because digital and 3D are on show. World J Urol. 2018;36:549-555.
27. Sharma H, Zerbe N, Klempert I, Hellwich O, Hufnagl P. Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology. Comput Med Imaging Graph. 2017;61:2-13.
28. Iizuka O, Kanavati F, Kato K, Rambeau M, Arihiro K, Tsuneki M. Deep Learning Models for Histopathological Classification of Gastric and Colonic Epithelial Tumours. Sci Rep. 2020;10:1504.
29. Song Z, Zou S, Zhou W, Huang Y, Shao L, Yuan J, Gou X, Jin W, Wang Z, Chen X, Ding X, Liu J, Yu C, Ku C, Liu C, Sun Z, Xu G, Wang Y, Zhang X, Wang D, Wang S, Xu W, Davis RC, Shi H. Clinically applicable histopathological diagnosis system for gastric cancer detection using deep learning. Nat Commun. 2020;11:4294.
30. Wang J, Yu JC, Kang WM, Ma ZQ. Treatment strategy for early gastric cancer. Surg Oncol. 2012;21:119-123.
31. Tsujii Y, Kato M, Inoue T, Yoshii S, Nagai K, Fujinaga T, Maekawa A, Hayashi Y, Akasaka T, Shinzaki S, Watabe K, Nishida T, Iijima H, Tsujii M, Takehara T. Integrated diagnostic strategy for the invasion depth of early gastric cancer by conventional endoscopy and EUS. Gastrointest Endosc. 2015;82:452-459.
32. Yoon HJ, Kim S, Kim JH, Keum JS, Oh SI, Jo J, Chun J, Youn YH, Park H, Kwon IG, Choi SH, Noh SH. A Lesion-Based Convolutional Neural Network Improves Endoscopic Detection and Depth Prediction of Early Gastric Cancer. J Clin Med. 2019;8.
33. Ajani JA, D'Amico TA, Almhanna K, Bentrem DJ, Chao J, Das P, Denlinger CS, Fanta P, Farjah F, Fuchs CS, Gerdes H, Gibson M, Glasgow RE, Hayman JA, Hochwald S, Hofstetter WL, Ilson DH, Jaroszewski D, Johung KL, Keswani RN, Kleinberg LR, Korn WM, Leong S, Linn C, Lockhart AC, Ly QP, Mulcahy MF, Orringer MB, Perry KA, Poultsides GA, Scott WJ, Strong VE, Washington MK, Weksler B, Willett CG, Wright CD, Zelman D, McMillian N, Sundar H. Gastric Cancer, Version 3.2016, NCCN Clinical Practice Guidelines in Oncology. J Natl Compr Canc Netw. 2016;14:1286-1312.
34. Zheng L, Zhang X, Hu J, Gao Y, Zhang M, Li S, Zhou X, Niu T, Lu Y, Wang D. Establishment and Applicability of a Diagnostic System for Advanced Gastric Cancer T Staging Based on a Faster Region-Based Convolutional Neural Network. Front Oncol. 2020;10:1238.
35. Gotoda T, Uedo N, Yoshinaga S, Tanuma T, Morita Y, Doyama H, Aso A, Hirasawa T, Yano T, Uchita K, Ho SH, Hsieh PH. Basic principles and practice of gastric cancer screening using high-definition white-light gastroscopy: Eyes can only see what the brain knows. Dig Endosc. 2016;28 Suppl 1:2-15.
36. England JR, Cheng PM. Artificial Intelligence for Medical Image Analysis: A Guide for Authors and Reviewers. AJR Am J Roentgenol. 2019;212:513-519.
37. Jin P, Ji X, Kang W, Li Y, Liu H, Ma F, Ma S, Hu H, Li W, Tian Y. Artificial intelligence in gastric cancer: a systematic review. J Cancer Res Clin Oncol. 2020;146:2339-2350.