Minireviews
Copyright ©The Author(s) 2021.
World J Gastroenterol. Jun 7, 2021; 27(21): 2818-2833
Published online Jun 7, 2021. doi: 10.3748/wjg.v27.i21.2818
Table 1 Artificial intelligence applications in gastric cancer pathology
| Ref. | Task | No. of cases/data set | Machine learning method | Performance |
|---|---|---|---|---|
| Bollschweiler et al[79] | Prognosis prediction | 135 cases | ANN | Accuracy (93%) |
| Duraipandian et al[80] | Tumor classification | 700 slides | GastricNet | Accuracy (100%) |
| Cosatto et al[65] | Tumor classification | > 12000 WSIs | MIL | AUC (0.96) |
| Sharma et al[21] | Tumor classification | 454 cases | CNN | Accuracy (69%, cancer classification; 81%, necrosis detection) |
| Jiang et al[81] | Prognosis prediction | 786 cases | SVM classifier | AUCs (up to 0.83) |
| Qu et al[82] | Tumor classification | 9720 images | DL | AUCs (up to 0.97) |
| Yoshida et al[23] | Tumor classification | 3062 gastric biopsy specimens | ML | Overall concordance rate (55.6%) |
| Kather et al[34] | Prediction of microsatellite instability | 1147 cases (gastric and colorectal cancer) | Deep residual learning | AUC (0.81, gastric cancer; 0.84, colorectal cancer) |
| Garcia et al[30] | Tumor classification | 3257 images | CNN | Accuracy (96.9%) |
| León et al[83] | Tumor classification | 40 images | CNN | Accuracy (up to 89.7%) |
| Fu et al[32] | Prediction of genomic alterations, gene expression profiling, and immune infiltration | > 1000 cases (gastric, colorectal, esophageal, and liver cancers) | Neural networks | AUC (0.9, BRAF mutation prediction in thyroid cancer) |
| Liang et al[84] | Tumor classification | 1900 images | DL | Accuracy (91.1%) |
| Sun et al[85] | Tumor classification | 500 images | DL | Accuracy (91.6%) |
| Tomita et al[24] | Tumor classification | 502 cases (esophageal adenocarcinoma and Barrett esophagus) | Attention-based deep learning | Accuracy (83%) |
| Wang et al[86] | Tumor classification | 608 images | Recalibrated multi-instance deep learning | Accuracy (86.5%) |
| Iizuka et al[22] | Tumor classification | 1746 biopsy WSIs | CNN, RNN | AUCs (up to 0.98); accuracy (95.6%) |
| Kather et al[33] | Prediction of genetic alterations and gene expression signatures | > 1000 cases (gastric, colorectal, and pancreatic cancer) | Neural networks | AUC (up to 0.8) |
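Most entries above report discrimination as the area under the ROC curve (AUC). As a reminder of what that figure measures, here is a minimal, illustrative sketch of AUC computed from per-slide classifier scores via the Mann-Whitney pairwise formulation; the scores and labels are hypothetical, not taken from any of the cited studies:

```python
def auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs the classifier ranks correctly,
    counting ties as half-correct."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical per-slide tumor probabilities
tumor = [0.92, 0.81, 0.67]   # slides labeled malignant
normal = [0.15, 0.30, 0.72]  # slides labeled benign
print(round(auc(tumor, normal), 2))  # → 0.89 (8 of 9 pairs ranked correctly)
```

An AUC of 1.0 means every malignant slide outscores every benign slide; 0.5 is chance-level ranking.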
Table 2 Artificial intelligence applications in colorectal cancer pathology
| Ref. | Task | No. of cases/data set | Machine learning method | Performance |
|---|---|---|---|---|
| Xu et al[38] | Tumor classification: 6 classes (NL/ADC/MC/SC/PC/CCTA) | 717 patches | AlexNet | Accuracy (97.5%) |
| Awan et al[87] | Tumor classification: normal/low-grade cancer/high-grade cancer | 454 cases | Neural networks | Accuracy (97%, 2-class; 91%, 3-class) |
| Haj-Hassan et al[37] | Tumor classification: 3 classes (NL/AD/ADC) | 30 multispectral image patches | CNN | Accuracy (99.2%) |
| Kainz et al[88] | Tumor classification: benign/malignant | 165 images | CNN (LeNet-5) | Accuracy (95%-98%) |
| Korbar et al[36] | Tumor classification: 6 classes (NL/HP/SSP/TSA/TA/TVA-VA) | 697 cases | ResNet | Accuracy (93.0%) |
| Yoshida et al[35] | Tumor classification | 1328 colorectal biopsy WSIs | ML | Accuracy (90.1%, adenoma) |
| Alom et al[89] | Tumor microenvironment analysis: classification, segmentation, and detection | 21135 patches | DCRN/R2U-Net | Accuracy (91.1%, classification) |
| Bychkov et al[42] | Prediction of colorectal cancer outcome (5-yr disease-specific survival) | 420 cases | Recurrent neural networks | HR of 2.3; AUC (0.69) |
| Weis et al[90] | Evaluation of tumor budding | 401 cases | CNN | Correlation R (0.86) |
| Ponzio et al[91] | Tumor classification: 3 classes (NL/AD/ADC) | 27 WSIs (13500 patches) | VGG16 | Accuracy (96%) |
| Kather et al[34] | Tumor classification: 2 classes (NL/tumor) | 94 WSIs | ResNet18 | AUC (> 0.99) |
| Kather et al[34] | Prediction of microsatellite instability | 360 TCGA-DX (93408 patches), 378 TCGA-KR (60894 patches) | ResNet18 | AUC (0.77, TCGA-DX; 0.84, TCGA-KR) |
| Kather et al[26] | Tumor microenvironment analysis: classification of 9 cell types | 86 WSIs (100000 patches) | VGG19 | Accuracy (94%-99%) |
| Kather et al[26] | Prognosis prediction | 1296 WSIs | VGG19 | Accuracy (94%-99%) |
| Kather et al[26] | Prognosis prediction | 934 cases | Deep learning (comparison of 5 networks) | HR for overall survival of 1.99 (training set) and 1.63 (test set) |
| Geessink et al[29] | Prognosis prediction, quantification of intratumoral stroma | 129 cases | Neural networks | HR of 2.04 for disease-free survival |
| Sena et al[40] | Tumor classification: 4 classes (NL/HP/AD/ADC) | 393 WSIs (12565 patches) | CNN | Accuracy (80%) |
| Shapcott et al[92] | Tumor microenvironment analysis: detection and classification | 853 patches and 142 TCGA images | CNN with a grid-based attention network | Accuracy (84%, training set; 65%, test set) |
| Sirinukunwattana et al[31] | Prediction of consensus molecular subtypes of colorectal cancer | 1206 cases | Neural networks with domain-adversarial learning | AUC (0.84 and 0.95 in the two validation sets) |
| Swiderska-Chadaj et al[93] | Tumor microenvironment analysis: detection of immune cells (CD3+, CD8+) | 28 WSIs | FCN/LSM/U-Net | Sensitivity (74.0%) |
| Yoon et al[39] | Tumor classification: 2 classes (NL/tumor) | 57 WSIs (10280 patches) | VGG | Accuracy (93.5%) |
| Echle et al[46] | Prediction of microsatellite instability | 8836 cases | ShuffleNet (deep learning) | AUC (0.92, development cohort; 0.96, validation cohort) |
| Iizuka et al[22] | Tumor classification: 3 classes (NL/AD/ADC) | 4036 WSIs | CNN/RNN | AUCs (0.96, ADC; 0.99, AD) |
| Skrede et al[28] | Prognosis prediction | 2022 cases | Neural networks with multiple instance learning | HR (3.04 after adjusting for established prognostic markers) |
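Many of the tumor-classification studies above tile each WSI into patches, classify the patches with a CNN, and then aggregate the patch predictions into a slide-level call. The aggregation rule varies between papers; the sketch below uses a simple, hypothetical majority-vote rule purely to illustrate the patch-to-slide step (the probabilities and both thresholds are assumptions, not any cited study's method):

```python
def slide_call(patch_probs, patch_threshold=0.5, slide_fraction=0.5):
    """Aggregate patch-level tumor probabilities into a slide-level call.

    A patch 'votes' tumor when its probability reaches patch_threshold;
    the slide is called tumor when the tumor-vote fraction reaches
    slide_fraction. Both thresholds are illustrative defaults."""
    votes = sum(p >= patch_threshold for p in patch_probs)
    return "tumor" if votes / len(patch_probs) >= slide_fraction else "normal"

print(slide_call([0.9, 0.8, 0.7, 0.2]))  # 3/4 patches vote tumor → "tumor"
print(slide_call([0.1, 0.2, 0.6, 0.3]))  # 1/4 patches vote tumor → "normal"
```

In practice, the choice of aggregation rule (majority vote, max-pooling, a learned attention head, or a second-stage RNN as in Iizuka et al[22]) materially affects slide-level sensitivity and specificity.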
Table 3 Advantages and disadvantages of representative machine-learning methods in the development of artificial intelligence-models for gastrointestinal pathology
| AI model | Advantages | Disadvantages |
|---|---|---|
| Conventional ML (supervised) | User can incorporate domain knowledge into features | Requires hand-crafted features; accuracy depends heavily on the quality of feature extraction |
| Conventional ML (unsupervised) | Executable without labels | Results are often unstable and can be hard to interpret |
| Deep neural networks (CNN) | Automatic feature extraction; high accuracy | Requires a large dataset; low explainability ("black box") |
| Multi-instance learning | Executable without detailed labels | Requires a large dataset; high computational cost |
| Semantic segmentation (FCN, U-Net) | Pixel-level detection gives the position, size, and shape of the target | High labeling cost |
| Recurrent neural networks | Learn sequential data | High computational cost |
| Generative adversarial networks | Learn to synthesize new realistic data | Complex and unstable training |
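The multi-instance learning row above refers to training from slide-level labels alone, without patch annotations: under the standard MIL assumption, a bag (a whole-slide image) is positive if at least one of its instances (patches) is positive. A minimal sketch of the corresponding max-pooling inference step, with hypothetical patch scores and an assumed 0.5 decision threshold:

```python
def mil_bag_score(instance_scores):
    """Standard MIL assumption: a bag (e.g., a WSI) is positive if at
    least one instance (patch) is positive, so the bag score is the
    maximum instance score (max-pooling aggregation)."""
    return max(instance_scores)

def mil_bag_label(instance_scores, threshold=0.5):
    """Binarize the bag score at an illustrative decision threshold."""
    return int(mil_bag_score(instance_scores) >= threshold)

# Hypothetical patch scores for two slides
print(mil_bag_label([0.1, 0.2, 0.9]))  # one strongly positive patch → 1
print(mil_bag_label([0.1, 0.2, 0.3]))  # no positive patch → 0
```

This also makes the table's trade-off concrete: only the slide label is needed for training, but because supervision is so weak, large datasets (e.g., the > 12000 WSIs of Cosatto et al[65]) are typically required.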