Review Open Access
Copyright ©The Author(s) 2021. Published by Baishideng Publishing Group Inc. All rights reserved.
Artif Intell Gastroenterol. Dec 28, 2021; 2(6): 141-156
Published online Dec 28, 2021. doi: 10.35712/aig.v2.i6.141
Artificial intelligence in pathological evaluation of gastrointestinal cancers
Anil Alpsoy, Aysen Yavuz, Gulsum Ozlem Elpek, Department of Pathology, Akdeniz University Medical School, Antalya 07070, Turkey
ORCID number: Anil Alpsoy (0000-0003-4978-7652); Aysen Yavuz (0000-0001-9991-5515); Gulsum Ozlem Elpek (0000-0002-1237-5454).
Author contributions: Alpsoy A and Yavuz A performed the data acquisition; Elpek GO designed the outline and coordinated the writing of the paper; all authors equally contributed to the writing of the paper and preparation of the tables.
Conflict-of-interest statement: There is no conflict of interest to disclose.
Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: https://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Gulsum Ozlem Elpek, MD, Professor, Pathology, Akdeniz University Medical School, Dumlupınar bulvarı, Antalya 07070, Turkey. elpek@akdeniz.edu.tr
Received: December 6, 2021
Peer-review started: December 6, 2021
First decision: December 13, 2021
Revised: December 19, 2021
Accepted: December 27, 2021
Article in press: December 27, 2021
Published online: December 28, 2021

Abstract

The integration of artificial intelligence (AI) has shown promising benefits in many fields of diagnostic histopathology, including gastrointestinal cancers (GCs), such as tumor identification, classification, and prognosis prediction. In parallel, recent evidence suggests that AI may help reduce the workload in gastrointestinal pathology by automatically detecting tumor tissues and evaluating prognostic parameters. In addition, AI seems to be an attractive tool for biomarker/genetic alteration prediction in GC, as it can extract from visual data a massive amount of information that is complex and only partially accessible to pathologists. From this point of view, it has been suggested that advances in AI could lead to revolutionary changes in many fields of pathology. However, many hurdles must still be overcome before AI applications can be safely and effectively applied in actual pathology practice. These hurdles span a broad spectrum of challenges, from needs identification to cost-effectiveness. Therefore, unlike in other disciplines of medicine, no histopathology-based AI application, including in GC, has yet been approved by a regulatory authority or for public reimbursement. The purpose of this review is to present data related to the applications of AI in pathology practice in GC and to outline the challenges that need to be overcome for their implementation.

Key Words: Digital image analysis, Digital pathology, Colorectal cancer, Gastric cancer, Machine learning, Deep learning

Core Tip: Recently, based on improvements in computational power and learning capacities, various artificial intelligence applications, such as image-based diagnosis and prognosis prediction, have emerged in many fields of pathology. This review comprehensively summarizes the current status of artificial intelligence applications in gastrointestinal cancers. The present data are promising for the use of artificial intelligence to diagnose tumors, evaluate prognostic parameters, and detect biomarker/genetic alterations. However, many challenges hinder the implementation of artificial intelligence models in real pathology practice. Therefore, these challenges and suggested solutions are also briefly presented to improve the accuracy and relevance of artificial intelligence in pathological practice, including in gastrointestinal cancers.



INTRODUCTION

Pathology is a medical specialty that performs morphological evaluations of organs, tissues, and cells to provide a definitive diagnosis of diseases and contributes to treatment by determining the critical parameters in their course[1]. Although histopathological assessment under a light microscope is considered a cornerstone, especially in oncology, the search for more objective criteria to overcome the subjectivity related to interobserver and intraobserver variations and to reduce the increasing workload and time consumption has led to the development of image analysis-based digital pathology (DP), which plays a crucial role in modern pathological practice[2,3].

Following considerable advances in slide scanner technology that can quickly digitize whole pathological slides at high resolution (whole-slide images, WSIs), the approval of the Philips IntelliSite whole-slide scanner (Philips Electronics, Amsterdam, Netherlands) by the Food and Drug Administration (FDA) in the United States in 2017 allowed a comprehensive evolution in DP[4]. This digitization not only facilitated the application of telepathology and created a valuable resource for education but also enabled the analysis of a large spectrum of morphological parameters and biomarkers/genetic alterations[5-7]. In addition, such digital images are composed of matrices of numbers that contain much information that is not accessible to the human eye[8,9]. Indeed, it may be possible to extract predictive and prognostic biomarkers from such digitized slides by computer-based image analysis. These methods are of direct interest to "computational pathology", a relatively new pathology field driven by artificial intelligence (AI) that is expected to transform and improve the diagnosis and staging of cancers[3,10]. As a result, pathological AI models have evolved from expert systems to traditional machine learning (ML) and, finally, deep learning (DL)[11]. While traditional supervised ML produces outputs from previously labeled training sets and can be corrected by users, labeling big data can be time-consuming and challenging[12]. In addition, its accuracy depends heavily on the quality of feature extraction. In contrast, unsupervised ML is a time-saving approach because it detects patterns automatically[13]. However, because the input data are not labeled by users, interpretation can be challenging and results can vary.
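To make this distinction concrete, the sketch below (a hypothetical illustration, not taken from any cited study) contrasts a supervised classifier trained on labeled handcrafted features with an unsupervised clustering of the same, unlabeled features using scikit-learn; the feature names, labels, and data are placeholders.

```python
# Hedged sketch: supervised vs unsupervised ML on handcrafted features
# extracted from image tiles. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))             # e.g., nuclear area, circularity, texture, density
y = (X[:, 0] + X[:, 2] > 0).astype(int)   # hypothetical tumor (1) vs benign (0) labels

# Supervised ML: learns from previously labeled training examples
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:400], y[:400])
print("supervised accuracy:", clf.score(X[400:], y[400:]))

# Unsupervised ML: groups the same tiles without any labels; the resulting
# clusters still have to be interpreted by a pathologist afterwards
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```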

On the other hand, DL extracts features directly from the raw data and utilizes multiple layers of hidden data to produce the output[14-16]. Compared with expert systems and handcrafted ML models, DL models are simpler to implement, achieve higher precision, and are more cost-effective[9,17]. Furthermore, a considerable increase in computational processing capacity and the development of algorithms such as convolutional neural networks (CNNs), fully convolutional networks (FCNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs) have resulted in numerous investigations on the application of DL-based AI in pathological practice[7,18,19]. The strengths and weaknesses of typical ML methods are summarized in Table 1.

Table 1 Strengths and weaknesses of machine learning methods in the development of artificial intelligence models for gastrointestinal pathology.

| AI model | Advantages | Disadvantages |
|---|---|---|
| Traditional ML (supervised) | Allows users to produce a data output from the previously labeled training set | Labeling big data can be time-consuming and challenging |
| | Users can reflect domain knowledge features | Accuracy depends heavily on the quality of feature extraction |
| Traditional ML (unsupervised) | Users do not label any data or supervise the model | Input data are unknown and not labeled by users |
| | Can detect patterns automatically | Users cannot get precise information regarding data sorting |
| | Saves time | Challenges during interpretation |
| CNN | Detects the important information and features without labeling | A large training dataset is required |
| | High performance in image recognition | Lack of interpretability (black boxes) |
| FCN | Provides computational speed | Requires large amounts of labeled data for training |
| | Automatically eliminates the background noise | High labeling cost |
| RNN | Can decide which information to remember from its past experience | Harder to train the model |
| | A deep learning model for sequential data | High computational cost |
| MIL | Does not require detailed annotation | A large amount of training data is required |
| | Can be applied to large data sets | High computational cost |
| GAN | Generates new realistic data resembling the original data | Harder to train the model |

AI: Artificial intelligence; ML: Machine learning; CNN: Convolutional neural network; FCN: Fully convolutional network; RNN: Recurrent neural network; MIL: Multi-instance learning; GAN: Generative adversarial network.
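As a minimal, hypothetical illustration of the CNN approach listed in Table 1, the following sketch defines a small patch-level classifier in PyTorch; the architecture, layer sizes, and input dimensions are illustrative assumptions rather than the design of any cited model.

```python
# Hedged sketch of a CNN for 224x224 RGB tissue patches (tumor vs non-tumor).
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = PatchCNN()
logits = model(torch.randn(8, 3, 224, 224))  # a batch of 8 dummy patches
print(logits.shape)                          # torch.Size([8, 2])
```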

The use of AI in pathology has also led to the emergence of many DL-based applications[20]. Proscia, DeepLens, PathAI, and Inspirata offer DL-based tools for the detection, diagnosis, and prognosis of several cancer subtypes[21-25]. In addition, Inspirata and PAIGE.AI are spending substantial time and resources on creating large libraries of digital WSIs for use in training AI algorithms[21,24]. Interestingly, the landscape of DP is, in parallel, also undergoing important innovation and rapid change[10].

It is also notable that some institutions are digitizing their entire pathology workflow, suggesting that AI-based systems will soon be used routinely in many areas of pathology[26,27]. Indeed, many studies have suggested that the integration of AI provides benefits for diagnosing and subtyping tumors, detecting histopathological parameters related to prognosis, and even identifying biomarker/genetic alterations in many fields of pathology[28]. On the other hand, a broad spectrum of difficulties, from AI-based pathology laboratory infrastructures to the robustness of algorithms, indicates that there are still many obstacles to be resolved before introducing AI applications into real-life pathology practice[29]. Nonetheless, AI-based approaches have the potential to contribute to pathological practice by improving workflows, eliminating simple errors, and increasing diagnostic reproducibility.

Regarding the gastrointestinal system, the accumulated data indicate that AI-based models might provide diagnostic assistance, prognosis prediction, and biomarker development for gastrointestinal cancer (GC). A few recent studies have addressed the effectiveness of AI models in GC[8,30]. However, effective implementation of these methods in real-life pathology practice requires further reviews comparing the results of previous studies and highlighting the challenges to be overcome.

This review presents recent data on the AI-based pathological evaluation of GC and the current challenges for its implementation in gastrointestinal pathology practice, along with future directions to consider.

AI-BASED APPLICATIONS IN DIAGNOSIS OF GC

Recent studies on the use of AI models in the histopathological classification of gastric cancer are summarized in Table 2. Although the models used differ among studies, the reported accuracy and area under the curve (AUC) values support the use of AI-based classification in histopathological evaluations. A few studies have considered different models together. For example, in a study where two DL-based methods were used to diagnose gastric cancer, the mean accuracy of both models was shown to be up to 89.7%[31]. In another study that compared the classification results of experienced pathologists with those of an ML-based program created by NEC Corporation on gastric biopsy specimens, the agreement rate for biopsy specimens negative for neoplastic lesions was found to be as high as 90.6%[32]. More recently, Iizuka et al[33], who aimed to classify gastric biopsies as gastric adenocarcinoma, adenoma, or nonneoplastic mucosa by using AI algorithms based on CNNs and RNNs, revealed that the AUC for gastric adenocarcinoma classification was 0.9, supporting that AI-based models could be helpful in the diagnosis of gastric cancer. Although these results suggest that AI can be used to diagnose gastric cancer, these models are difficult to judge on performance comparisons alone. Across studies, parameters such as the size of the dataset, the resolution of detection, multisite validation, the number of categories to be classified, and, most importantly, the presence of lesions other than malignancies that require diagnosis are also critical variables. In particular, the latter could be a potential limitation of AI-based models in actual practice. Indeed, a gastric biopsy is evaluated not only for malignancy but also for lesions such as gastritis and metaplasia. Therefore, an AI model used only for malignancy screening in gastric pathology will not reduce the pathologist's workload, as other findings also need to be reviewed.

Table 2 Artificial intelligence-based applications in gastric cancer.

| Ref. | Task | No. of cases/data set | Method | Performance |
|---|---|---|---|---|
| Duraipandian et al[89] | Classification | 700 slides | GastricNet | Accuracy (100%) |
| Cosatto et al[72] | | > 12000 WSIs | MIL | AUC (0.96) |
| Sharma et al[31] | | 454 cases | CNN | Accuracy (69%) |
| Qu et al[90] | | 9720 images | DL | AUCs (up to 0.97) |
| Yoshida et al[32] | | 3062 gastric biopsies | ML | Overall concordance rate (55.6%) |
| León et al[91] | | 40 images | CNN | Accuracy (up to 89.7%) |
| Liang et al[92] | | 1900 images | DL | Accuracy (91.1%) |
| Sun et al[93] | | 500 images | DL | Accuracy (91.6%) |
| Tomita et al[94] | | 502 images1 | Attention-based DL | Accuracy (83%) |
| Wang et al[95] | | 608 images | Recalibrated multi-instance DL | Accuracy (86.5%) |
| Iizuka et al[33] | | 1746 biopsy WSIs | CNN, RNN | Accuracy (95.6%), AUCs (up to 0.98) |
| Bollschweiler et al[41] | Prognosis | 135 cases | ANN | Accuracy (93%) |
| Hensler et al[42] | | 4302 cases | QUEEN technique | Accuracy (72.73%) |
| Jagric et al[43] | | 213 cases | Learning vector quantization NN | Sensitivity (71%), specificity (96.1%) |
| Lu et al[36] | | 939 cases | MMHG | Accuracy (69.28%) |
| Jiang et al[37] | | 786 cases | SVM classifier | AUCs (up to 0.83) |
| Liu et al[40] | | 432 tissue samples | SVM classifier | Accuracy (up to 94.19%) |
| Korhani Kangi and Bahrampour[38] | | 339 cases | ANN, BNN | Sensitivity (88.2% for ANN, 90.3% for BNN); specificity (95.4% for ANN, 90.9% for BNN) |
| Zhang et al[39] | | 669 cases | ML | AUCs (up to 0.831) |
| García et al[44] | Tumor infiltrating lymphocytes | 3257 images | CNN | Accuracy (96.9%) |
| Kather et al[56] | Genetic alterations | 1147 cases2 | Deep residual learning | AUC (0.81 for gastric cancer) |
| Kather et al[47] | | > 1000 cases3 | NN | AUC (up to 0.8) |
| Fu et al[57] | | > 1000 cases4 | NN | Variable across tumors/gene alterations; strongest relations in whole-genome duplications |
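For readers less familiar with the metrics reported in Table 2, the short sketch below shows how accuracy and AUC would be computed from slide-level predictions; the labels and predicted probabilities are synthetic placeholders, not data from any cited study.

```python
# Hedged illustration of accuracy and AUC on synthetic slide-level predictions.
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]                   # 1 = adenocarcinoma, 0 = non-neoplastic
y_prob = [0.1, 0.4, 0.8, 0.9, 0.6, 0.2, 0.7, 0.3]   # model-predicted probabilities
y_pred = [int(p >= 0.5) for p in y_prob]            # threshold at 0.5 for accuracy

print("accuracy:", accuracy_score(y_true, y_pred))
print("AUC:", roc_auc_score(y_true, y_prob))
```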

AI applications have also been developed to diagnose colorectal cancer (CRC) and may allow classification of lesions as normal, hyperplasia, adenoma, or adenocarcinoma and of histological subtypes of polyps or adenocarcinomas (Table 3). In an elegant study, Korbar et al[34] observed that their AI models could classify five colorectal polyp types with 93% accuracy. In another study, a DL model was able to classify colorectal polyps in a manner comparable to that of pathologists, even in datasets from other hospitals[35]. From this perspective, the results of most studies are encouraging for the use of AI models in the diagnosis of CRC. However, reliably comparing the performance of these models requires a common task using a standardized dataset with standardized annotations, because in current studies each model is derived from a different dataset with different annotations and is focused on different tasks.

Table 3 Artificial intelligence-based applications in colorectal cancer.

| Ref. | Task | No. of cases/data set | Method | Performance |
|---|---|---|---|---|
| Xu et al[96] | Classification | 717 patches (N, ADC subtypes) | AlexNet | Accuracy (97.5%) |
| Awan et al[97] | | 454 cases (N, ADC grades LG vs HG) | NN | Accuracy (97% for 2-class; 91% for 3-class) |
| Haj-Hassan et al[98] | | 30 multispectral image patches (N, AD, ADC) | CNN | Accuracy (99.2%) |
| Kainz et al[99] | | 165 images (benign vs malignant) | CNN (LeNet-5) | Accuracy (95%-98%) |
| Korbar et al[34] | | 697 cases (N, AD subtypes) | ResNet | Accuracy (93.0%) |
| Yoshida et al[100] | | 1328 colorectal biopsy WSIs | ML | Accuracy (90.1% for adenoma) |
| Wei et al[35] | | 326 slides (training), 25 slides (validation), 157 slides (internal set) | ResNet | 157 slides: accuracy 93.5% vs 91.4% (pathologists); 238 slides: accuracy 87.0% vs 86.6% (pathologists) |
| Ponzio et al[101] | | 27 WSIs (13500 patches) (N, AD, ADC) | VGG16 | Accuracy (96%) |
| Kather et al[47] | | 94 WSIs1 | ResNet18 | AUC (> 0.99) |
| Yoon et al[102] | | 57 WSIs (10280 patches) | VGG | Accuracy (93.5%) |
| Iizuka et al[33] | | 4036 WSIs (N, AD, ADC) | CNN/RNN | AUCs (0.96, ADC; 0.99, AD) |
| Sena et al[103] | | 393 WSIs (12565 patches) (N, HP, AD, ADC) | CNN | Accuracy (80%) |
| Bychkov et al[45] | Prognosis | 420 cases | RNN | HR of 2.3, AUC (0.69) |
| Kather et al[46] | | 1296 WSIs | VGG19 | Accuracy (94%-99%) |
| Kather et al[46] | | 934 cases | DL (comparison of 5 networks) | HR for overall survival of 1.63-1.99 |
| Geessink et al[104] | | 129 cases | NN | HR of 2.04 for disease-free survival |
| Skrede et al[105] | | 2022 cases | Neural networks with MIL | HR 3.04 |
| Kather et al[47] | Genetic alterations | TCGA-DX (93408 patches)1, TCGA-KR (60894 patches) | ResNet18 | AUC (0.77), TCGA-DX; AUC (0.84), TCGA-KR |
| Echle et al[55] | | 8836 cases (MSI) | ShuffleNet DL | AUC (0.92-0.96 in two cohorts) |
| Kather et al[47] | Tumor microenvironment analysis | 86 WSIs (100000)1 | VGG19 | Accuracy (94%-99%) |
| Shapcott et al[48] | | 853 patches and 142 TCGA images | CNN with a grid-based attention network | Accuracy (65%-84% in two sets) |
| Swiderska-Chadaj et al[49] | | 28 WSIs | FCN/LSM/U-Net | Sensitivity (74.0%) |
| Alom et al[106] | | 21135 patches | DCRN/R2U-Net | Accuracy (91.9%) |
| Sirinukunwattana et al[107] | Molecular subtypes | 1206 cases | NN with domain-adversarial learning | AUC (0.84-0.95 in the two validation sets) |
| Weis et al[50] | Tumor budding | 401 cases | CNN | Correlation R (0.86) |

AI-BASED APPLICATIONS FOR PROGNOSTICATION OF GC

Because gastric cancer has more complex and heterogeneous morphological features than CRC, most AI-based studies performed on these tumors focus on diagnosis rather than prognostication (Table 2). Nevertheless, there is some evidence showing that AI models can be helpful for evaluating histopathological parameters, such as differentiation and lymphovascular involvement, which are essential in determining survival time[36-38], recurrence risk[39,40], metastasis[41-43], and, accordingly, the treatment of gastric cancer. In survival analysis, a support vector machine (SVM) classifier demonstrated higher predictive accuracy for overall survival and disease-free survival than the tumor-node-metastasis staging system defined by the American Joint Committee on Cancer[37]. In addition, this method can also be used to predict adjuvant chemotherapeutic benefit, which can facilitate individualized therapy. Another study, which combined the demographics, pathological indicators, and physiological characteristics of the study group, found that a new multimodal hypergraph learning framework developed to improve the accuracy of survival prediction outperformed random forests and SVMs[36]. Furthermore, when artificial neural network and Bayesian neural network (BNN) models were compared for survival estimation, the BNN was shown to be superior to the artificial neural network[38].
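As a hedged illustration of the SVM-based approach described above (not the classifier of ref. [37]), the sketch below trains an SVM on hypothetical immunomarker features to separate patients into risk groups and reports a cross-validated AUC; the features, labels, and data are synthetic placeholders.

```python
# Hedged sketch: SVM risk classification from hypothetical immunomarker features.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))            # e.g., densities of six immune markers per patient
y = (X[:, 0] - X[:, 3] > 0).astype(int)  # hypothetical 5-yr survival label

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.mean())
```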

The application of neural networks significantly improves the prediction of lymph node metastasis[41]. In addition, in a study to characterize the microenvironment that can predict tumor behavior, García et al[44] observed that a CNN model could be used to detect tumor-infiltrating lymphocytes (accuracy, 96.9%). However, more such studies are needed to draw firmer conclusions about the application of AI-based DP in the prognostication of gastric cancer.

In CRC, DL was found to be effective in predicting prognosis at all stages. For example, in a study in which an RNN analyzed tissue microarrays to predict 5-year disease-specific survival, the hazard ratio and AUC were 2.3 and 0.69, respectively[45]. In another study, a 99% accuracy was observed in estimating the course of the disease using more than 1000 histological images collected from three institutions[46]. Finally, in a comparison of five separate DL networks using 934 cases, Kather et al[47] observed a hazard ratio of 1.99 for overall survival. In studies investigating the microenvironment with AI-based models in these tumors, AUC values ranged from 0.91 to 0.99[47-49]. In another interesting study, Weis et al[50] pointed out that detecting tumor bud hot spots with a CNN may assist in evaluating tumor budding, which plays a role in determining tumor behavior. The characteristics of these studies are briefly presented in Table 3. Although these findings need to be supported and standardized by further comparative studies, they suggest that AI can be applied to determine the behavior of CRC.
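As a hedged sketch of how hazard ratios such as those cited above can be derived from a model-generated risk score, the example below fits a Cox proportional hazards model with the lifelines package on synthetic follow-up data; the variable names and data are illustrative only.

```python
# Hedged sketch: estimating a hazard ratio from a (synthetic) DL risk score.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 300
risk_score = rng.uniform(0, 1, n)                        # e.g., per-patient output of a DL model
time = rng.exponential(scale=60 / (1 + 2 * risk_score))  # months of follow-up (synthetic)
event = rng.binomial(1, 0.7, n)                          # 1 = death/recurrence observed

df = pd.DataFrame({"risk_score": risk_score, "time": time, "event": event})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "p"]])                   # exp(coef) is the hazard ratio
```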

AI-BASED APPLICATIONS FOR GENETIC AND MOLECULAR TESTING IN GC

In routine practice, evaluating surgical and biopsy specimens of GI cancers is essential for identifying molecular biomarkers that predict the response to targeted therapies. This evaluation requires the use of immunohistochemistry or advanced molecular techniques.

The detection of microsatellite instability (MSI), especially in CRC, is very important for treatment with immunomodulators[51-53]. In addition, it is possible to determine the MSI-related phenotype and identify conditions that require family information and close follow-up of the patient, such as Lynch syndrome[54]. The revelation that some of the genetic events in these cancers are associated with certain morphological features has led to several attempts to use AI-based algorithms on WSIs. Furthermore, due to the large number of samples available, CRC was seen as a prototype for these studies. In this context, accumulated data indicate that AI-based models are effective in determining both MSI and other genotypic changes[47,55-57]. In particular, the DL algorithm developed by Echle et al[55] to detect MSI in CRC using more than 8800 images recently showed an AUC of 0.96 in the multi-institutional validation cohort (Table 3).

There have been other attempts to develop models that directly predict gene mutations from WSIs of gastric cancer. In addition, it has been observed that AI can also predict gene expression and RNA-seq data, and these models have remarkable potential for clinical translation[47,56,57] (Table 2).

However, additional prospective validation studies of GI cancers are necessary before AI can be applied in real life to reduce the molecular testing workload and to allow testing in health care centers with limited resources.

CHALLENGES AND IMPLEMENTATION OF AI-BASED APPLICATIONS IN REAL-LIFE PRACTICE

In general, the need for a close review of the steps involved in ethics, design, financing, development, validation and regulation, implementation, and impact on the workforce in the application of AI in pathology has been highlighted[58].

From this perspective, although AI-based models are likely to play a critical role in gastrointestinal pathology, including GC, in the future, several problems similar to those in other fields of pathology need to be addressed to ensure implementation. Brief information about the difficulties encountered in applying AI models in pathology, including GC, and suggested solutions are presented in Table 4.

Table 4 Summary of challenges and suggested solutions in the development process of artificial intelligence applications.

| Process | Challenges | Suggested solutions |
|---|---|---|
| Ethical considerations | Lack of patient approval for commercial use | Approval for both research and product development |
| Design of AI models | Underestimation of end-users' needs | Collaboration with stakeholders |
| Optimization of data sets | CNN: Large amounts of images | Augmentation techniques, transfer learning |
| | Rare tumors: Limited number of images | Global data sharing |
| | Variations in preanalytical and analytical phases | AI algorithms to standardize staining, color properties, and WSI quality |
| Annotation of data sets | Interobserver variations in diagnosis | MIL algorithms |
| | Discrepancies among performances of trained algorithms | |
| Validation | Presence of ground truth without objectivity | Multicenter evaluations that include many pathologists and data sets |
| Regulation | Lack of current regulatory guidance specific for AI tools | New guidelines and regulations for safer and more effective AI tools |
| Implementation | Changes in work-flow | Selection of AI applications that will speed up the work-flow |
| | IT infrastructure investment | Augmented microscopy directed to the cloud network service |
| | The relative inexperience of pathologists | Training about AI, integration of AI into medical education |
| | AI applications that lack interpretability (black box) | Construction of interpretable models, generation of attention heat maps |
| | Lack of external quality assurance | A scheme for this purpose should be designed |
| | Legal implications | The performance of AI algorithms should be assured for reporting |

AI: Artificial intelligence; CNN: Convolutional neural network; WSI: Whole-slide image; MIL: Multi-instance learning; IT: Information technology.

Ethical considerations

Although consent can be obtained from patients to use data for research purposes, a lack of approval for commercial use can cause problems in developing AI models[59]. Some researchers argue that this can be resolved by developing a framework for global data sharing by obtaining approvals that convey the possibility of commercial use for research and product development[30].

Design of AI models

The primary expectation for AI in pathology is to fill gaps and address unmet needs in the daily workflow. These needs mainly include workload-intensive and repetitive procedures, such as quantifying tumor necrosis, mitotic counts, and lymph node metastases, and diagnosing lesions prone to interobserver variability. The main goal in developing AI applications in pathology should therefore be to solve a real clinical need. However, the development of AI models in this field of medicine involves a variety of stakeholders, including not just pathologists but also computer scientists, IT specialists, and pharmaceutical companies, which inevitably leads to different expectations and perspectives. For example, some projects may aim at academic publication, while others aim to become profitable commercial products. Therefore, a solution expected by pathologists may not meet financial expectations, and a company may choose not to develop it. To overcome these challenges and develop AI algorithms that can be used effectively in DP, including for GC, pathologists, academic professionals who can develop the technology, and companies that will market the product must collaborate in harmony.

Development of AI models

Once AI models are designed and built, their development requires an accurate definition of the output, a straightforward design of the algorithm, collection of a large sample with follow-up (or at least pilot data), data disclosure and processing, and statistical analysis.

From this perspective, the optimization of high-quality datasets can be considered one of the biggest obstacles to the development of AI in DP. CNNs require large pathological image datasets, often comprising thousands of images, to perform adequately[60]. Especially for rare tumors, the inability to obtain a very high number of images is quite limiting. To overcome this situation, the use of data augmentation techniques and transfer learning methods is recommended. Indeed, Jones et al[61] indicated that small-scale datasets of < 100 digital slides might be sufficient in the case of transfer learning. Recently, it was proposed to develop publicly available datasets for global data sharing. However, very few such datasets are available in pathology, partially due to privacy, copyright, and financial issues[62]. Although The Cancer Genome Atlas provides many WSIs and associated molecular data, it does not contain enough cases for training AI applications for clinical practice[63,64]. Hartman et al[63] pointed out that another potential source of datasets could be the public challenges organized for developing DL algorithms.
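The two remedies mentioned above can be illustrated with a short, hedged sketch using PyTorch/torchvision; the augmentation choices, class count, and frozen-backbone strategy are illustrative assumptions, not recommendations from the cited studies.

```python
# Hedged sketch: on-the-fly data augmentation plus transfer learning from an
# ImageNet-pretrained backbone (torchvision).
import torch.nn as nn
from torchvision import models, transforms

# Data augmentation: each training patch is randomly flipped/rotated/color-jittered,
# effectively enlarging a small histology dataset
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.02),
    transforms.RandomRotation(90),
    transforms.ToTensor(),
])

# Transfer learning: reuse pretrained convolutional features, retrain only the head
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False                  # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)    # new head, e.g., adenoma vs carcinoma
```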

Again, achieving high performance with AI applications in DP requires training on large datasets, which can be affected by the preanalytical (variations in fixation protocols and in the thickness of tissue sections) and analytical (variations in staining techniques and scanning protocols) phases applied to acquire digital images[65,66]. Indeed, converting a glass slide to a WSI is not a simple task, and color modifications may influence the accuracy of AI. For this purpose, several AI algorithms have emerged in recent years to standardize data, including staining and color properties[67-69]. In addition, several automated algorithms, such as DeepFocus, have been developed to standardize WSI quality by automatically detecting regions of optimal quality and removing out-of-focus or artifact-related regions[70,71].
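As a simple illustration of color standardization, the sketch below applies Reinhard-style statistics matching in LAB space, which is far less elaborate than the AI-based methods cited above; the tiles here are random placeholders rather than real H&E images.

```python
# Hedged sketch: match the per-channel LAB mean/std of a tile to a reference tile.
import numpy as np
from skimage import color

def reinhard_normalize(image_rgb: np.ndarray, target_rgb: np.ndarray) -> np.ndarray:
    """Shift/scale the LAB statistics of `image_rgb` toward those of `target_rgb`."""
    src = color.rgb2lab(image_rgb)
    tgt = color.rgb2lab(target_rgb)
    out = np.empty_like(src)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-8
        t_mean, t_std = tgt[..., c].mean(), tgt[..., c].std()
        out[..., c] = (src[..., c] - s_mean) / s_std * t_std + t_mean
    return np.clip(color.lab2rgb(out), 0, 1)

# Usage: normalize a tile from one scanner/stain batch toward a reference tile
tile = np.random.rand(256, 256, 3)       # placeholder for a scanned tile
reference = np.random.rand(256, 256, 3)  # placeholder for the reference tile
normalized = reinhard_normalize(tile, reference)
```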

Annotation of the dataset

The curation of the dataset should be followed by annotation, which is another complex task. The scope of this annotation is broad, depending on the AI model, ranging from classification at the slide level to labeling at the pixel level[7,30]. For pathologists, annotating many images is a time-consuming and sometimes challenging effort that can affect the accuracy of the models being trained, especially when the task is complex, when, as in gastrointestinal pathology, the diagnosis of the selected disease differs significantly among observers (e.g., intramucosal carcinomas), and when the accuracy of dataset descriptions cannot be guaranteed[72]. Moreover, a trained algorithm may not achieve the same performance on datasets from other medical centers. Recently, many efforts have been made to solve the annotation problems that hinder the application of AI in pathology practice[67,73]. The data support that multi-instance learning (MIL) algorithms can be applied without detailed annotation. In particular, there is evidence that MIL can be effective when a large dataset is available and detailed annotations are impossible to obtain[60].
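A minimal, hypothetical sketch of attention-based MIL is shown below: a whole slide is treated as a bag of patch feature vectors carrying only a slide-level label, so no patch-level annotation is required. The feature dimension, bag size, and layer widths are illustrative assumptions.

```python
# Hedged sketch of attention-based multi-instance learning (MIL) for WSIs.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim: int = 512, hidden: int = 128, n_classes: int = 2):
        super().__init__()
        self.attention = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(),
                                       nn.Linear(hidden, 1))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, bag):                                   # bag: (n_patches, feat_dim)
        weights = torch.softmax(self.attention(bag), dim=0)   # one weight per patch
        slide_vector = (weights * bag).sum(dim=0)             # weighted slide embedding
        return self.classifier(slide_vector), weights

bag = torch.randn(1000, 512)               # e.g., features of 1000 patches from one WSI
model = AttentionMIL()
slide_logits, patch_weights = model(bag)   # slide-level prediction + patch attention
```

In such a setting, the learned attention weights can also be inspected afterwards to see which patches contributed most to the slide-level prediction.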

Validation and regulation

The preparation of the annotated dataset is followed by the model development process (preparation of the datasets for training, testing, and validation) and the selection of the learning method and ML technique. In this context, the validation of AI-based technologies requires an evidence-based approach, and it is emphasized that analytical validation should also be considered in a laboratory-centered medical discipline such as pathology[58,73]. Therefore, it is essential to establish steps and criteria for validating new tests according to the standards. For example, to validate the image analysis used to determine the expression of a biomarker, the technique can often be compared to a detailed manual tumor assessment. However, comparing the performance of an AI technique with that of pathologists is not straightforward, given intraobserver and interobserver variability. Today, there are difficulties associated with determining the "ground truth" for AI applications. This situation leads to the need for repeated validation of the robustness and reproducibility of AI applications in large and variable patient groups[30].
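As a small illustration of this validation issue, the hedged sketch below uses Cohen's kappa to quantify agreement between two (synthetic) pathologists and between a model and each of them; when the raters themselves disagree, the choice of "ground truth" directly affects the measured performance of the model.

```python
# Hedged sketch: agreement between raters and between model and raters (synthetic labels).
from sklearn.metrics import cohen_kappa_score

pathologist_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # 1 = neoplastic, 0 = non-neoplastic
pathologist_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]
model_pred    = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]

print("A vs B:",     cohen_kappa_score(pathologist_a, pathologist_b))
print("model vs A:", cohen_kappa_score(model_pred, pathologist_a))
print("model vs B:", cohen_kappa_score(model_pred, pathologist_b))
```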

There may be a relative lack of validation cohorts in the development of AI-based applications in DP. This shortcoming is compounded by potential limitations on sharing histopathological sections. Although the interobserver variability and subjectivity of pathologists' evaluations also underline the uncertainty of the "ground truth" in this respect, the best measure to overcome this obstacle may be multicenter evaluations that include more than one pathologist and dataset. From the perspective of GC, the lack of external validation in a substantial number of studies of AI applications may limit their practical use.

Regulation of AI

Although appropriate regulations are necessary for the safe and effective use of AI in pathology, as highlighted by Allen[74], regulatory approval should be structured to define the risk-benefit balance, reduce potential harm, produce appropriate verification standards, and encourage innovation. On the other hand, the presence of various challenges should not be ignored in this regard.

Various regulatory authorities [such as the FDA, Centers for Medicare and Medicaid Services (CMS), and the European Union Conformité Européenne (EUCE)] are not yet fully prepared for the implementation of AI applications in clinical medicine. As a result, AI-based devices are being controlled by old and potentially outdated guidelines for testing medical devices.

Currently, in the United States, the FDA is working on new regulations to make AI-based devices safer and more effective[75]. On the other hand, appropriate validation for all laboratory tests using human tissue prior to clinical application is required by CMS regardless of FDA approval, and this organization has no specific regulations to validate AI applications. Furthermore, the EUCE reported that in vitro diagnostic medical device directives will be replaced by in vitro diagnostic regulations in May 2022[76]. In addition, it is necessary to take into account the regulatory trends of the country where AI is implemented.

Implementation

The implementation of AI models in daily pathology practice depends on meeting specific requirements by overcoming various challenges. First, a laboratory infrastructure equipped to enable AI applications within a time frame that does not interfere with patient care is essential. Currently, many pathology laboratories only use tissue sections for diagnostic evaluations. However, the implementation of AI models will require new DP-related equipment, software, a specific data management system, data storage facilities, and, more importantly, a substantial investment to cover their cost[77]. In addition, an institutional IT platform is required to enable practitioners to operate on-site and cloud-based computing systems. Thus, DP applications may require significant investment, hindering the implementation of these technologies. It has been demonstrated that augmented microscopy directly connected to a cloud network service can solve the whole-slide scanner setup problem[78]. The cloud-based AI application developed by Google can also aid in the search for morphologically similar features in a target image, regardless of annotation status[79].

The relative inexperience of pathologists with AI-based technologies should not be overlooked. Therefore, pathologists need to improve their knowledge of both the installation of DP systems and the application of AI. Another problem is that, given the reported performance of some algorithms, automated AI models are believed to outperform pathologists, causing pathologists to be hesitant about these applications[79-81]. However, current results suggest that AI models are more likely to help improve the overall quality of pathological diagnosis and provide relevant additional information rather than replacing pathologists[82,83]. Indeed, there will always be a need for pathologists to audit technologies and control systems in AI implementation. Therefore, pathologists must be aware of the long-term risk-benefit balance of AI applications[84]. Since current DL-based AI applications lack interpretability, it may be helpful to develop AI solutions that end-users can interpret, thus providing them with detailed explanations of how their predictions are made. Although DL's "black box" problem has not been fully resolved, several solutions have been reported, such as constructing an interpretable model, generating an attention heatmap, and constructing an external interpretive model[85-88].
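As a hedged sketch of one such explanation technique, the function below builds a simple occlusion-based heat map for any patch classifier (for example, a CNN like the one sketched after Table 1): each image region is masked in turn and the drop in predicted tumor probability is recorded. The patch size and target class are illustrative assumptions, and this is only one of several possible interpretability approaches.

```python
# Hedged sketch: occlusion-based importance map for a patch classifier.
import torch

def occlusion_heatmap(model, image, target_class: int = 1, patch: int = 32):
    """image: (3, H, W) tensor; returns an (H//patch, W//patch) importance map."""
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
        _, h, w = image.shape
        heat = torch.zeros(h // patch, w // patch)
        for i in range(0, h - patch + 1, patch):
            for j in range(0, w - patch + 1, patch):
                occluded = image.clone()
                occluded[:, i:i + patch, j:j + patch] = 0          # mask one region
                prob = torch.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
                heat[i // patch, j // patch] = base - prob         # importance = prob. drop
    return heat
```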

While AI assistance in pathological diagnosis may reduce the opportunities for learning diagnostic skills during pathology training, resident pathologists should be trained and encouraged to learn the utility, limitations, and pitfalls of AI application as an adjunct method to improve the quality and precision of clinical diagnoses. Therefore, some reforms may be required in pathology training, starting with medical education followed by a pathology education program to address a more accurate and safer implementation of AI in pathology practice[84].

Like other clinical tests, quality assurance is an important issue for the effective use of AI in DP, and consequently, a scheme of external quality assurance for applications should be urgently prepared for its implementation. Furthermore, laboratory staff should be aware of the quality management system.

Beyond all this, the legal implications of signing a report prepared by a pathologist using AI should not be ignored. Therefore, to include AI findings in a pathological report, the performance of the algorithm must be assured. This legal issue also supports the notion that AI cannot replace pathologists but that AI can be used to support pathologists in clinical trials.

CONCLUSION

AI-based approaches have the potential to contribute to the pathological diagnosis and staging of GC by improving workflows, eliminating simple errors, and increasing diagnostic reproducibility. They may also encourage biomarker discovery by revealing predictions that are impossible with traditional visual methods. However, there are many hurdles to overcome, including infrastructure and the generalizability of algorithms. Overcoming these obstacles requires the efforts of computer scientists, pathologists, and clinicians, who must address each challenge separately and cooperate in harmony. In this way, AI applications that are user-friendly, explainable, manageable, and cost-effective can play a crucial role in the development of pathological assessments to be used in the diagnosis, prognosis, and treatment of GC.

Footnotes

Provenance and peer review: Unsolicited article; Externally peer reviewed.

Peer-review model: Single blind

Specialty type: Pathology

Country/Territory of origin: Turkey

Peer-review report’s scientific quality classification

Grade A (Excellent): 0

Grade B (Very good): 0

Grade C (Good): 0

Grade D (Fair): D, D

Grade E (Poor): 0

P-Reviewer: Balakrishnan DS, Li ZS S-Editor: Liu M L-Editor: Wang TQ P-Editor: Liu M

References
1. Goldblum JR, Lamps LW, McKenney JK, Myers JL. Rosai and Ackerman's Surgical Pathology. 11th ed. Amsterdam: Elsevier Health Sciences, 2018: 1-20.
2. Niazi MKK, Parwani AV, Gurcan MN. Digital pathology and artificial intelligence. Lancet Oncol. 2019;20:e253-e261.
3. Abels E, Pantanowitz L, Aeffner F, Zarella MD, van der Laak J, Bui MM, Vemuri VN, Parwani AV, Gibbs J, Agosto-Arroyo E, Beck AH, Kozlowski C. Computational pathology definitions, best practices, and recommendations for regulatory guidance: a white paper from the Digital Pathology Association. J Pathol. 2019;249:286-294.
4. Food and Drug Administration. IntelliSite Pathology Solution (PIPS, Philips Medical Systems) [cited 29 September 2020]. Available from: https://www.fda.gov/drugs/resources-informationapproved-drugs/intellisite-pathology-solution-pips-philips-medical-systems.
5. Dangott B, Parwani A. Whole slide imaging for teleconsultation and clinical use. J Pathol Inform. 2010;1.
6. Evans AJ, Depeiza N, Allen SG, Fraser K, Shirley S, Chetty R. Use of whole slide imaging (WSI) for distance teaching. J Clin Pathol. 2021;74:425-428.
7. Saillard C, Schmauch B, Laifa O, Moarii M, Toldo S, Zaslavskiy M, Pronier E, Laurent A, Amaddeo G, Regnault H, Sommacale D, Ziol M, Pawlotsky JM, Mulé S, Luciani A, Wainrib G, Clozel T, Courtiol P, Calderaro J. Predicting Survival After Hepatocellular Carcinoma Resection Using Deep Learning on Histological Slides. Hepatology. 2020;72:2000-2013.
8. Calderaro J, Kather JN. Artificial intelligence-based pathology for gastrointestinal and hepatobiliary cancers. Gut. 2021;70:1183-1193.
9. Courtiol P, Maussion C, Moarii M, Pronier E, Pilcer S, Sefta M, Manceron P, Toldo S, Zaslavskiy M, Le Stang N, Girard N, Elemento O, Nicholson AG, Blay JY, Galateau-Sallé F, Wainrib G, Clozel T. Deep learning-based classification of mesothelioma improves prediction of patient outcome. Nat Med. 2019;25:1519-1525.
10. Bera K, Schalper KA, Rimm DL, Velcheti V, Madabhushi A. Artificial intelligence in digital pathology - new tools for diagnosis and precision oncology. Nat Rev Clin Oncol. 2019;16:703-715.
11. Rashidi HH, Tran NK, Betts EV, Howell LP, Green R. Artificial Intelligence and Machine Learning in Pathology: The Present Landscape of Supervised Methods. Acad Pathol. 2019;6:2374289519873088.
12. Saxena S, Gyanchandani M. Machine Learning Methods for Computer-Aided Breast Cancer Diagnosis Using Histopathology: A Narrative Review. J Med Imaging Radiat Sci. 2020;51:182-193.
13. Gurcan MN, Boucheron LE, Can A, Madabhushi A, Rajpoot NM, Yener B. Histopathological image analysis: a review. IEEE Rev Biomed Eng. 2009;2:147-171.
14. Wang X, Chen H, Gan C, Lin H, Dou Q, Tsougenis E, Huang Q, Cai M, Heng PA. Weakly Supervised Deep Learning for Whole Slide Lung Cancer Image Analysis. IEEE Trans Cybern. 2020;50:3950-3962.
15. Silva-Rodríguez J, Colomer A, Naranjo V. WeGleNet: A weakly-supervised convolutional neural network for the semantic segmentation of Gleason grades in prostate histology images. Comput Med Imaging Graph. 2021;88:101846.
16. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436-444.
17. Hinton GE, Salakhutdinov RR. Reducing the dimensionality of data with neural networks. Science. 2006;313:504-507.
18. Niu PH, Zhao LL, Wu HL, Zhao DB, Chen YT. Artificial intelligence in gastric cancer: Application and future perspectives. World J Gastroenterol. 2020;26:5408-5419.
19. Thakur N, Yoon H, Chong Y. Current Trends of Artificial Intelligence for Colorectal Cancer Pathology Image Analysis: A Systematic Review. Cancers (Basel). 2020;12.
20. Khan A, Nawaz U, Ulhaq A, Robinson RW. Real-time plant health assessment via implementing cloud-based scalable transfer learning on AWS DeepLens. PLoS One. 2020;15:e0243243.
21. Fuchs TJ, Wild PJ, Moch H, Buhmann JM. Computational pathology analysis of tissue microarrays predicts survival of renal clear cell carcinoma patients. Med Image Comput Comput Assist Interv. 2008;11:1-8.
22. Proscia. Proscia digital pathology [cited 15 March 2021]. Available from: https://proscia.com.
23. Lens Deep. Digital pathology cloud platform [cited 15 March 2021]. Available from: https://www.deeplens.ai.
24. PathAI. PathAI [cited 15 March 2021]. Available from: https://www.pathai.com/.
25. Aifora. WebMicroscope. Big pictures. Deep Diagnosis [cited 15 March 2021]. Available from: https://www.aiforia.com/.
26. Pantanowitz L, Sinard JH, Henricks WH, Fatheree LA, Carter AB, Contis L, Beckwith BA, Evans AJ, Lal A, Parwani AV; College of American Pathologists Pathology and Laboratory Quality Center. Validating whole slide imaging for diagnostic purposes in pathology: guideline from the College of American Pathologists Pathology and Laboratory Quality Center. Arch Pathol Lab Med. 2013;137:1710-1722.
27. Cheng CL, Azhar R, Sng SH, Chua YQ, Hwang JS, Chin JP, Seah WK, Loke JC, Ang RH, Tan PH. Enabling digital pathology in the diagnostic setting: navigating through the implementation journey in an academic medical centre. J Clin Pathol. 2016;69:784-792.
28. Baxi V, Edwards R, Montalto M, Saha S. Digital pathology and artificial intelligence in translational medicine and clinical practice. Mod Pathol. 2021.
29. Bernstam EV, Shireman PK, Meric-Bernstam F, N Zozus M, Jiang X, Brimhall BB, Windham AK, Schmidt S, Visweswaran S, Ye Y, Goodrum H, Ling Y, Barapatre S, Becich MJ. Artificial intelligence in clinical and translational science: Successes, challenges and opportunities. Clin Transl Sci. 2021. Epub ahead of print.
30. Yoshida H, Kiyuna T. Requirements for implementation of artificial intelligence in the practice of gastrointestinal pathology. World J Gastroenterol. 2021;27:2818-2833.
31. Sharma H, Zerbe N, Klempert I, Hellwich O, Hufnagl P. Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology. Comput Med Imaging Graph. 2017;61:2-13.
32. Yoshida H, Shimazu T, Kiyuna T, Marugame A, Yamashita Y, Cosatto E, Taniguchi H, Sekine S, Ochiai A. Automated histological classification of whole-slide images of gastric biopsy specimens. Gastric Cancer. 2018;21:249-257.
33. Iizuka O, Kanavati F, Kato K, Rambeau M, Arihiro K, Tsuneki M. Deep Learning Models for Histopathological Classification of Gastric and Colonic Epithelial Tumours. Sci Rep. 2020;10:1504.
34. Korbar B, Olofson AM, Miraflor AP, Nicka CM, Suriawinata MA, Torresani L, Suriawinata AA, Hassanpour S. Deep Learning for Classification of Colorectal Polyps on Whole-slide Images. J Pathol Inform. 2017;8:30.
35. Wei JW, Suriawinata AA, Vaickus LJ, Ren B, Liu X, Lisovsky M, Tomita N, Abdollahi B, Kim AS, Snover DC, Baron JA, Barry EL, Hassanpour S. Evaluation of a Deep Neural Network for Automated Classification of Colorectal Polyps on Histopathologic Slides. JAMA Netw Open. 2020;3:e203398.
36. Lu F, Chen ZK, Yuan X, Li Q, Du ZD, Luo L, Zhang FY. MMHG: Multi-modal hypergraph learning for overall survival after D2 gastrectomy for gastric cancer. Proceedings of the 15th Intl Conf on Dependable, Autonomic and Secure Computing, 15th Intl Conf on Pervasive Intelligence and Computing, 3rd Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress; 2017 Nov 6-10; Orlando, FL, USA. California: IEEE Computer Society, 2017: 164-169.
37. Jiang Y, Xie J, Han Z, Liu W, Xi S, Huang L, Huang W, Lin T, Zhao L, Hu Y, Yu J, Zhang Q, Li T, Cai S, Li G. Immunomarker Support Vector Machine Classifier for Prediction of Gastric Cancer Survival and Adjuvant Chemotherapeutic Benefit. Clin Cancer Res. 2018;24:5574-5584.
38. Korhani Kangi A, Bahrampour A. Predicting the Survival of Gastric Cancer Patients Using Artificial and Bayesian Neural Networks. Asian Pac J Cancer Prev. 2018;19:487-490.
39. Zhang W, Fang M, Dong D, Wang X, Ke X, Zhang L, Hu C, Guo L, Guan X, Zhou J, Shan X, Tian J. Development and validation of a CT-based radiomic nomogram for preoperative prediction of early recurrence in advanced gastric cancer. Radiother Oncol. 2020;145:13-20.
40. Liu B, Tan J, Wang X, Liu X. Identification of recurrent risk-related genes and establishment of support vector machine prediction model for gastric cancer. Neoplasma. 2018;65:360-366.
41. Bollschweiler EH, Mönig SP, Hensler K, Baldus SE, Maruyama K, Hölscher AH. Artificial neural network for prediction of lymph node metastases in gastric cancer: a phase II diagnostic study. Ann Surg Oncol. 2004;11:506-511.
42. Hensler K, Waschulzik T, Mönig SP, Maruyama K, Hölscher AH, Bollschweiler E. Quality-assured Efficient Engineering of Feedforward Neural Networks (QUEEN) -- pretherapeutic estimation of lymph node status in patients with gastric carcinoma. Methods Inf Med. 2005;44:647-654.
43. Jagric T, Potrc S, Jagric T. Prediction of liver metastases after gastric cancer resection with the use of learning vector quantization neural networks. Dig Dis Sci. 2010;55:3252-3261.
44. García E, Hermoza R, Beltran-Castanon C, Cano L, Castillo M, Castanneda C. Automatic lymphocyte detection on gastric cancer IHC images using deep learning. IEEE. 2017;200-204.
45. Bychkov D, Linder N, Turkki R, Nordling S, Kovanen PE, Verrill C, Walliander M, Lundin M, Haglund C, Lundin J. Deep learning-based tissue analysis predicts outcome in colorectal cancer. Sci Rep. 2018;8:3395.
46. Kather JN, Krisam J, Charoentong P, Luedde T, Herpel E, Weis CA, Gaiser T, Marx A, Valous NA, Ferber D, Jansen L, Reyes-Aldasoro CC, Zörnig I, Jäger D, Brenner H, Chang-Claude J, Hoffmeister M, Halama N. Predicting survival from colorectal cancer histology slides using deep learning: A retrospective multicenter study. PLoS Med. 2019;16:e1002730.
47. Kather JN, Pearson AT, Halama N, Jäger D, Krause J, Loosen SH, Marx A, Boor P, Tacke F, Neumann UP, Grabsch HI, Yoshikawa T, Brenner H, Chang-Claude J, Hoffmeister M, Trautwein C, Luedde T. Deep learning can predict microsatellite instability directly from histology in gastrointestinal cancer. Nat Med. 2019;25:1054-1056.
48. Shapcott M, Hewitt KJ, Rajpoot N. Deep Learning With Sampling in Colon Cancer Histology. Front Bioeng Biotechnol. 2019;7:52.
49. Swiderska-Chadaj Z, Pinckaers H, van Rijthoven M, Balkenhol M, Melnikova M, Geessink O, Manson Q, Sherman M, Polonia A, Parry J, Abubakar M, Litjens G, van der Laak J, Ciompi F. Learning to detect lymphocytes in immunohistochemistry with deep learning. Med Image Anal. 2019;58:101547.
50. Weis CA, Kather JN, Melchers S, Al-Ahmdi H, Pollheimer MJ, Langner C, Gaiser T. Automatic evaluation of tumor budding in immunohistochemically stained colorectal carcinomas and correlation to clinical outcome. Diagn Pathol. 2018;13:64.
51. Le DT, Uram JN, Wang H, Bartlett BR, Kemberling H, Eyring AD, Skora AD, Luber BS, Azad NS, Laheru D, Biedrzycki B, Donehower RC, Zaheer A, Fisher GA, Crocenzi TS, Lee JJ, Duffy SM, Goldberg RM, de la Chapelle A, Koshiji M, Bhaijee F, Huebner T, Hruban RH, Wood LD, Cuka N, Pardoll DM, Papadopoulos N, Kinzler KW, Zhou S, Cornish TC, Taube JM, Anders RA, Eshleman JR, Vogelstein B, Diaz LA Jr. PD-1 Blockade in Tumors with Mismatch-Repair Deficiency. N Engl J Med. 2015;372:2509-2520.
52. Kather JN, Halama N, Jaeger D. Genomics and emerging biomarkers for immunotherapy of colorectal cancer. Semin Cancer Biol. 2018;52:189-197.
53. Mandal R, Samstein RM, Lee KW, Havel JJ, Wang H, Krishna C, Sabio EY, Makarov V, Kuo F, Blecua P, Ramaswamy AT, Durham JN, Bartlett B, Ma X, Srivastava R, Middha S, Zehir A, Hechtman JF, Morris LG, Weinhold N, Riaz N, Le DT, Diaz LA Jr, Chan TA. Genetic diversity of tumors with mismatch repair deficiency influences anti-PD-1 immunotherapy response. Science. 2019;364:485-491.
54. Lynch HT, Snyder CL, Shaw TG, Heinen CD, Hitchins MP. Milestones of Lynch syndrome: 1895-2015. Nat Rev Cancer. 2015;15:181-194.
55. Echle A, Grabsch HI, Quirke P, van den Brandt PA, West NP, Hutchins GGA, Heij LR, Tan X, Richman SD, Krause J, Alwers E, Jenniskens J, Offermans K, Gray R, Brenner H, Chang-Claude J, Trautwein C, Pearson AT, Boor P, Luedde T, Gaisa NT, Hoffmeister M, Kather JN. Clinical-Grade Detection of Microsatellite Instability in Colorectal Tumors by Deep Learning. Gastroenterology. 2020;159:1406-1416.e11.
56. Kather JN, Heij LR, Grabsch HI, Loeffler C, Echle A, Muti HS, Krause J, Niehues JM, Sommer KAJ, Bankhead P, Kooreman LFS, Schulte JJ, Cipriani NA, Buelow RD, Boor P, Ortiz-Brüchle NN, Hanby AM, Speirs V, Kochanny S, Patnaik A, Srisuwananukorn A, Brenner H, Hoffmeister M, van den Brandt PA, Jäger D, Trautwein C, Pearson AT, Luedde T. Pan-cancer image-based detection of clinically actionable genetic alterations. Nat Cancer. 2020;1:789-799.
57. Fu Y, Jung AW, Torne RV, Gonzalez S, Vöhringer H, Shmatko A, Yates LR, Jimenez-Linan M, Moore L, Gerstung M. Pan-cancer computational histopathology reveals mutations, tumor composition and prognosis. Nat Cancer. 2020;1:800-810.
58. Colling R, Pitman H, Oien K, Rajpoot N, Macklin P; CM-Path AI in Histopathology Working Group, Snead D, Sackville T, Verrill C. Artificial intelligence in digital pathology: a roadmap to routine use in clinical practice. J Pathol. 2019;249:143-150.
59. Kotsenas AL, Balthazar P, Andrews D, Geis JR, Cook TS. Rethinking Patient Consent in the Era of Artificial Intelligence and Big Data. J Am Coll Radiol. 2021;18:180-184.
60. Campanella G, Hanna MG, Geneslaw L, Miraflor A, Werneck Krauss Silva V, Busam KJ, Brogi E, Reuter VE, Klimstra DS, Fuchs TJ. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nat Med. 2019;25:1301-1309.
61.  Jones AD, Graff JP, Darrow M, Borowsky A, Olson KA, Gandour-Edwards R, Datta Mitra A, Wei D, Gao G, Durbin-Johnson B, Rashidi HH. Impact of pre-analytical variables on deep learning accuracy in histopathology. Histopathology. 2019;75:39-53.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 19]  [Cited by in F6Publishing: 24]  [Article Influence: 4.8]  [Reference Citation Analysis (0)]
62.  Hipp JD, Sica J, McKenna B, Monaco J, Madabhushi A, Cheng J, Balis UJ. The need for the pathology community to sponsor a whole slide imaging repository with technical guidance from the pathology informatics community. J Pathol Inform. 2011;2:31.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 7]  [Cited by in F6Publishing: 7]  [Article Influence: 0.5]  [Reference Citation Analysis (0)]
63.  Hartman DJ, Van Der Laak JAWM, Gurcan MN, Pantanowitz L. Value of Public Challenges for the Development of Pathology Deep Learning Algorithms. J Pathol Inform. 2020;11:7.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 16]  [Cited by in F6Publishing: 20]  [Article Influence: 5.0]  [Reference Citation Analysis (0)]
64.  Cooper LA, Demicco EG, Saltz JH, Powell RT, Rao A, Lazar AJ. PanCancer insights from The Cancer Genome Atlas: the pathologist's perspective. J Pathol. 2018;244:512-524.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 136]  [Cited by in F6Publishing: 106]  [Article Influence: 17.7]  [Reference Citation Analysis (0)]
65.  Inoue T, Yagi Y. Color standardization and optimization in whole slide imaging. Clin Diagn Pathol. 2020;4.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 8]  [Cited by in F6Publishing: 10]  [Article Influence: 2.5]  [Reference Citation Analysis (0)]
66.  Yoshida H, Yokota H, Singh R, Kiyuna T, Yamaguchi M, Kikuchi S, Yagi Y, Ochiai A. Meeting Report: The International Workshop on Harmonization and Standardization of Digital Pathology Image, Held on April 4, 2019 in Tokyo. Pathobiology. 2019;86:322-324.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 5]  [Cited by in F6Publishing: 4]  [Article Influence: 0.8]  [Reference Citation Analysis (0)]
67.  Dietterich TG, Lathrop RH, Lozano-Pérez T. Solving the multiple instance problem with axis- parallel rectangles. Artif Intell. 1997;89:31-71.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 1449]  [Cited by in F6Publishing: 1483]  [Article Influence: 54.9]  [Reference Citation Analysis (0)]
68.  Janowczyk A, Basavanhally A, Madabhushi A. Stain Normalization using Sparse AutoEncoders (StaNoSA): Application to digital pathology. Comput Med Imaging Graph. 2017;57:50-61.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 108]  [Cited by in F6Publishing: 116]  [Article Influence: 14.5]  [Reference Citation Analysis (0)]
69.  Vahadane A, Peng T, Sethi A, Albarqouni S, Wang L, Baust M, Steiger K, Schlitter AM, Esposito I, Navab N. Structure-Preserving Color Normalization and Sparse Stain Separation for Histological Images. IEEE Trans Med Imaging. 2016;35:1962-1971.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 341]  [Cited by in F6Publishing: 234]  [Article Influence: 29.3]  [Reference Citation Analysis (0)]
70.  Janowczyk A, Zuo R, Gilmore H, Feldman M, Madabhushi A. HistoQC: An Open-Source Quality Control Tool for Digital Pathology Slides. JCO Clin Cancer Inform. 2019;3:1-7.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 94]  [Cited by in F6Publishing: 114]  [Article Influence: 28.5]  [Reference Citation Analysis (0)]
71.  Senaras C, Niazi MKK, Lozanski G, Gurcan MN. DeepFocus: Detection of out-of-focus regions in whole slide digital images using deep learning. PLoS One. 2018;13:e0205387.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 73]  [Cited by in F6Publishing: 55]  [Article Influence: 9.2]  [Reference Citation Analysis (0)]
72.  Cosatto E, Laquerre PF, Malon C, Graf HP, Saito A, Kiyuna T, Marugame A, Kamijo K.   Automated gastric cancer diagnosis on H and E-stained sections; training a classifier on a large scale with multiple instance machine learning. Proceedings of SPIE - Progress in Biomedical Optics and Imaging, MI: 2013.  [PubMed]  [DOI]  [Cited in This Article: ]
73.  Mattocks CJ, Morris MA, Matthijs G, Swinnen E, Corveleyn A, Dequeker E, Müller CR, Pratt V, Wallace A; EuroGentest Validation Group. A standardized framework for the validation and verification of clinical molecular genetic tests. Eur J Hum Genet. 2010;18:1276-1288.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 141]  [Cited by in F6Publishing: 128]  [Article Influence: 9.1]  [Reference Citation Analysis (0)]
74.  Allen TC. Regulating Artificial Intelligence for a Successful Pathology Future. Arch Pathol Lab Med. 2019;143:1175-1179.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 23]  [Cited by in F6Publishing: 23]  [Article Influence: 4.6]  [Reference Citation Analysis (0)]
75.  United States Food and Drug Administration  Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD). [cited 7 January 2021]. Available from: https://www.fda.gov/media/122535/download.  [PubMed]  [DOI]  [Cited in This Article: ]
76.  European Commission  Medical Devices – Sector. [cited 7 January 2021]. Available from: https://ec.europa.eu/growth/sectors/medical-devices_en.  [PubMed]  [DOI]  [Cited in This Article: ]
77.  Retamero JA, Aneiros-Fernandez J, Del Moral RG. Complete Digital Pathology for Routine Histopathology Diagnosis in a Multicenter Hospital Network. Arch Pathol Lab Med. 2020;144:221-228.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 63]  [Cited by in F6Publishing: 78]  [Article Influence: 15.6]  [Reference Citation Analysis (0)]
78.  Chen PC, Gadepalli K, MacDonald R, Liu Y, Kadowaki S, Nagpal K, Kohlberger T, Dean J, Corrado GS, Hipp JD, Mermel CH, Stumpe MC. An augmented reality microscope with real-time artificial intelligence integration for cancer diagnosis. Nat Med. 2019;25:1453-1457.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 109]  [Cited by in F6Publishing: 110]  [Article Influence: 22.0]  [Reference Citation Analysis (0)]
79.  Hegde N, Hipp JD, Liu Y, Emmert-Buck M, Reif E, Smilkov D, Terry M, Cai CJ, Amin MB, Mermel CH, Nelson PQ, Peng LH, Corrado GS, Stumpe MC. Similar image search for histopathology: SMILY. NPJ Digit Med. 2019;2:56.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 76]  [Cited by in F6Publishing: 52]  [Article Influence: 10.4]  [Reference Citation Analysis (0)]
80.  Deshpande S, Minhas F, Graham S, Rajpoot N.   SAFRON: Stitching across the frontier for generating colorectal cancer histology images. [cited 15 March 2021]. Available from: http:// arxiv. org/abs/ 2008. 04526.  [PubMed]  [DOI]  [Cited in This Article: ]
81.  Hekler A, Utikal JS, Enk AH, Solass W, Schmitt M, Klode J, Schadendorf D, Sondermann W, Franklin C, Bestvater F, Flaig MJ, Krahl D, von Kalle C, Fröhling S, Brinker TJ. Deep learning outperformed 11 pathologists in the classification of histopathological melanoma images. Eur J Cancer. 2019;118:91-96.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 117]  [Cited by in F6Publishing: 74]  [Article Influence: 14.8]  [Reference Citation Analysis (0)]
82.  Holzinger A, Malle B, Kieseberg P, Roth PM, Müller H, Reihs R, Zatloukal K.   Towards the augmented pathologist: Challenges of Explainable-AI in digital pathology. [cited 15 March 2021]. Available from: http:// arxiv. org/ abs/ 1712. 06657.  [PubMed]  [DOI]  [Cited in This Article: ]
83.  Kiani A, Uyumazturk B, Rajpurkar P, Wang A, Gao R, Jones E, Yu Y, Langlotz CP, Ball RL, Montine TJ, Martin BA, Berry GJ, Ozawa MG, Hazard FK, Brown RA, Chen SB, Wood M, Allard LS, Ylagan L, Ng AY, Shen J. Impact of a deep learning assistant on the histopathologic classification of liver cancer. NPJ Digit Med. 2020;3:23.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 136]  [Cited by in F6Publishing: 108]  [Article Influence: 27.0]  [Reference Citation Analysis (0)]
84.  Arora A, Arora A. Pathology training in the age of artificial intelligence. J Clin Pathol. 2021;74:73-75.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 3]  [Cited by in F6Publishing: 3]  [Article Influence: 0.8]  [Reference Citation Analysis (0)]
85.  Montavon G, Samek W, Müller KR. Methods for interpreting and understanding deep neural networks. Digit Signal Process. 2018;73:1-15.  [PubMed]  [DOI]  [Cited in This Article: ]
86.  Tosun AB, Pullara F, Becich MJ, Taylor DL, Fine JL, Chennubhotla SC. Explainable AI (xAI) for Anatomic Pathology. Adv Anat Pathol. 2020;27:241-250.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 25]  [Cited by in F6Publishing: 30]  [Article Influence: 7.5]  [Reference Citation Analysis (0)]
87.  Yang JH, Wright SN, Hamblin M, McCloskey D, Alcantar MA, Schrübbers L, Lopatkin AJ, Satish S, Nili A, Palsson BO, Walker GC, Collins JJ. A White-Box Machine Learning Approach for Revealing Antibiotic Mechanisms of Action. Cell. 2019;177:1649-1661.e9.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 153]  [Cited by in F6Publishing: 177]  [Article Influence: 35.4]  [Reference Citation Analysis (0)]
88.  Kuhn DR, Kacker RN, Lei Y, Simos DE.   Combinatorial methods for Explainable AI. Proceedings of the 2020 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW); 2020 Oct 24-28; Porto. IEEE, 2020: 167-170.  [PubMed]  [DOI]  [Cited in This Article: ]
89.  Duraipandian S, Sylvest Bergholt M, Zheng W, Yu Ho K, Teh M, Guan Yeoh K, Bok Yan So J, Shabbir A, Huang Z. Real-time Raman spectroscopy for in vivo, online gastric cancer diagnosis during clinical endoscopic examination. J Biomed Opt. 2012;17:081418.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 95]  [Cited by in F6Publishing: 78]  [Article Influence: 6.5]  [Reference Citation Analysis (0)]
90.  Qu J, Hiruta N, Terai K, Nosato H, Murakawa M, Sakanashi H. Gastric Pathology Image Classification Using Stepwise Fine-Tuning for Deep Neural Networks. J Healthc Eng. 2018;2018:8961781.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 43]  [Cited by in F6Publishing: 34]  [Article Influence: 5.7]  [Reference Citation Analysis (0)]
91.  León F, Gélvez M, Jaimes Z, Gelvez T, Arguello H.   Supervised classification of histopathological images using convolutional neuronal networks for gastric cancer detection. 2019 XXII Symposium on Image, Signal Processing and Artificial Vision (STSIVA); 2019 Apr 24-26; Bucaramanga, Colombia. IEEE, 2019: 1-5.  [PubMed]  [DOI]  [Cited in This Article: ]
92.  Liang Q, Nan Y, Coppola G, Zou K, Sun W, Zhang D, Wang Y, Yu G. Weakly Supervised Biomedical Image Segmentation by Reiterative Learning. IEEE J Biomed Health Inform. 2019;23:1205-1214.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 30]  [Cited by in F6Publishing: 24]  [Article Influence: 4.0]  [Reference Citation Analysis (0)]
93.  Sun M, Zhang G, Dang H, Qi X, Zhou X, Chang Q. Accurate Gastric Cancer Segmentation in Digital Pathology Images Using Deformable Convolution and Multi-Scale Embedding Networks. IEEE Access. 2019;7:75530-75541.  [PubMed]  [DOI]  [Cited in This Article: ]
94.  Tomita N, Abdollahi B, Wei J, Ren B, Suriawinata A, Hassanpour S. Attention-Based Deep Neural Networks for Detection of Cancerous and Precancerous Esophagus Tissue on Histopathological Slides. JAMA Netw Open. 2019;2:e1914645.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 113]  [Cited by in F6Publishing: 77]  [Article Influence: 15.4]  [Reference Citation Analysis (0)]
95.  Wang S, Zhu Y, Yu L, Chen H, Lin H, Wan X, Fan X, Heng PA. RMDL: Recalibrated multi-instance deep learning for whole slide gastric image classification. Med Image Anal. 2019;58:101549.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 68]  [Cited by in F6Publishing: 76]  [Article Influence: 15.2]  [Reference Citation Analysis (0)]
96.  Xu Y, Jia Z, Wang LB, Ai Y, Zhang F, Lai M, Chang EI. Large scale tissue histopathology image classification, segmentation, and visualization via deep convolutional activation features. BMC Bioinformatics. 2017;18:281.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 208]  [Cited by in F6Publishing: 164]  [Article Influence: 23.4]  [Reference Citation Analysis (0)]
97.  Awan R, Sirinukunwattana K, Epstein D, Jefferyes S, Qidwai U, Aftab Z, Mujeeb I, Snead D, Rajpoot N. Glandular Morphometrics for Objective Grading of Colorectal Adenocarcinoma Histology Images. Sci Rep. 2017;7:16852.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 55]  [Cited by in F6Publishing: 62]  [Article Influence: 8.9]  [Reference Citation Analysis (0)]
98.  Haj-Hassan H, Chaddad A, Harkouss Y, Desrosiers C, Toews M, Tanougast C. Classifications of Multispectral Colorectal Cancer Tissues Using Convolution Neural Network. J Pathol Inform. 2017;8:1.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 40]  [Cited by in F6Publishing: 41]  [Article Influence: 5.9]  [Reference Citation Analysis (0)]
99.  Kainz P, Pfeiffer M, Urschler M. Segmentation and classification of colon glands with deep convolutional neural networks and total variation regularization. PeerJ. 2017;5:e3874.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 86]  [Cited by in F6Publishing: 63]  [Article Influence: 9.0]  [Reference Citation Analysis (0)]
100.  Yoshida H, Yamashita Y, Shimazu T, Cosatto E, Kiyuna T, Taniguchi H, Sekine S, Ochiai A. Automated histological classification of whole slide images of colorectal biopsy specimens. Oncotarget. 2017;8:90719-90729.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 26]  [Cited by in F6Publishing: 27]  [Article Influence: 3.9]  [Reference Citation Analysis (0)]
101.  Ponzio F, Macii E, Ficarra E, Di Cataldo S.   Colorectal Cancer Classification using Deep Convolutional Networks-An Experimental Study. Proceedings of the 11th International Joint Conference on Biomedical Engineering Systems and Technologies - Volume 2. Bioimaging, 2018: 58-66.  [PubMed]  [DOI]  [Cited in This Article: ]
102.  Yoon H, Lee J, Oh JE, Kim HR, Lee S, Chang HJ, Sohn DK. Tumor Identification in Colorectal Histology Images Using a Convolutional Neural Network. J Digit Imaging. 2019;32:131-140.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 29]  [Cited by in F6Publishing: 31]  [Article Influence: 7.8]  [Reference Citation Analysis (0)]
103.  Sena P, Fioresi R, Faglioni F, Losi L, Faglioni G, Roncucci L. Deep learning techniques for detecting preneoplastic and neoplastic lesions in human colorectal histological images. Oncol Lett. 2019;18:6101-6107.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 19]  [Cited by in F6Publishing: 17]  [Article Influence: 3.4]  [Reference Citation Analysis (0)]
104.  Geessink OGF, Baidoshvili A, Klaase JM, Ehteshami Bejnordi B, Litjens GJS, van Pelt GW, Mesker WE, Nagtegaal ID, Ciompi F, van der Laak JAWM. Computer aided quantification of intratumoral stroma yields an independent prognosticator in rectal cancer. Cell Oncol (Dordr). 2019;42:331-341.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 46]  [Cited by in F6Publishing: 68]  [Article Influence: 13.6]  [Reference Citation Analysis (0)]
105.  Skrede OJ, De Raedt S, Kleppe A, Hveem TS, Liestøl K, Maddison J, Askautrud HA, Pradhan M, Nesheim JA, Albregtsen F, Farstad IN, Domingo E, Church DN, Nesbakken A, Shepherd NA, Tomlinson I, Kerr R, Novelli M, Kerr DJ, Danielsen HE. Deep learning for prediction of colorectal cancer outcome: a discovery and validation study. Lancet. 2020;395:350-360.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 332]  [Cited by in F6Publishing: 268]  [Article Influence: 67.0]  [Reference Citation Analysis (0)]
106.  Alom M, Yakopcic C, Taha T, Asari V.   Microscopic Nuclei Classification, Segmentation and Detection with improved Deep Convolutional Neural Network (DCNN) Approaches. 2018 Preprint. Available from: arXiv:1811.03447.  [PubMed]  [DOI]  [Cited in This Article: ]
107.  Sirinukunwattana K, Domingo E, Richman SD, Redmond KL, Blake A, Verrill C, Leedham SJ, Chatzipli A, Hardy C, Whalley CM, Wu CH, Beggs AD, McDermott U, Dunne PD, Meade A, Walker SM, Murray GI, Samuel L, Seymour M, Tomlinson I, Quirke P, Maughan T, Rittscher J, Koelzer VH. S: CORT consortium. Image-based consensus molecular subtype (imCMS) classification of colorectal cancer using deep learning. Gut. 2021;70:544-554.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 133]  [Cited by in F6Publishing: 105]  [Article Influence: 35.0]  [Reference Citation Analysis (0)]