Minireviews Open Access
Copyright ©The Author(s) 2025. Published by Baishideng Publishing Group Inc. All rights reserved.
Artif Intell Med Imaging. Jun 8, 2025; 6(1): 106928
Published online Jun 8, 2025. doi: 10.35711/aimi.v6.i1.106928
Application of artificial intelligence-assisted confocal laser endomicroscopy in gastrointestinal imaging analysis
Yu-Shun Liu, Ze-Hua Shi, Yan-Rui Jin, Cheng-Liang Liu, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
Cui-Ping Yang, Department of Gastroenterology, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, Shanghai 200025, China
ORCID number: Cui-Ping Yang (0000-0002-7039-0919); Cheng-Liang Liu (0009-0001-7041-7747).
Co-first authors: Yu-Shun Liu and Ze-Hua Shi.
Co-corresponding authors: Cui-Ping Yang and Cheng-Liang Liu.
Author contributions: Liu YS was responsible for the literature search and drafting the initial manuscript; Shi ZH contributed to the drafting of the manuscript and made significant revisions; Jin YR was involved in the conceptualization and revision of the manuscript; Liu YS and Shi ZH contributed equally to this work as co-first authors; Yang CP and Liu CL both jointly contributed to the overall framework design of this manuscript, clarified the writing direction; Liu CL provided important feedback and guidance throughout the writing process; Yang CP carefully reviewed the manuscript drafts in detail; Yang CP and Liu CL are recognized as co-corresponding authors.
Supported by Interdisciplinary Program of Shanghai Jiao Tong University, No. YG2024 LC01; and National Natural Science Foundation of China, No. 62406190.
Conflict-of-interest statement: All authors declare no competing interests.
Open Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: https://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Cui-Ping Yang, Department of Gastroenterology, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, No. 197 Ruijin Er Road, Shanghai 200025, China. yangcuipingsgh@163.com
Received: March 24, 2025
Revised: April 8, 2025
Accepted: April 27, 2025
Published online: June 8, 2025
Processing time: 76 Days and 2.2 Hours

Abstract

Confocal laser endomicroscopy (CLE) has become an indispensable tool in the detection and diagnosis of gastrointestinal (GI) diseases due to its high-resolution, high-contrast imaging capabilities. However, the early imaging changes of GI disorders are often subtle, and traditional medical image analysis relies heavily on manual interpretation, which is time-consuming, subject to observer variability, and inefficient for accurate lesion identification across large-scale image datasets. With the introduction of artificial intelligence (AI) technologies, AI-driven CLE image analysis systems can automatically extract pathological features and have demonstrated significant clinical value in lesion recognition, classification, and malignancy prediction for GI diseases. These systems greatly enhance diagnostic efficiency and early detection capabilities. This review summarizes the applications of AI-assisted CLE in GI diseases, analyzes the limitations of current technologies, and explores future research directions. The deep integration of AI and confocal imaging technologies is expected to provide strong support for precision diagnosis and personalized treatment of gastrointestinal disorders.

Key Words: Confocal laser endomicroscopy; Artificial intelligence; Gastrointestinal diseases; Medical image analysis; Early diagnosis

Core Tip: Confocal laser endomicroscopy (CLE) offers real-time imaging with cellular-level resolution and plays an important role in the early diagnosis of gastrointestinal (GI) diseases. However, its full clinical potential is often hindered by the subjectivity of manual interpretation and limitations in diagnostic efficiency. Rapid advances in deep learning for medical image analysis have enabled the detection of subtle imaging changes that are easily missed by traditional methods. Through multidimensional feature analysis, these technologies can intelligently predict lesion malignancy, providing objective and efficient support for clinical decision-making. This review systematically presents the practical applications of artificial intelligence-assisted CLE in the detection and diagnosis of GI diseases and discusses its future prospects in advancing precision medicine.



INTRODUCTION

Confocal laser endomicroscopy (CLE), as a groundbreaking endoscopic imaging technology, has demonstrated unique advantages in the diagnosis of gastrointestinal (GI) diseases in recent years. By scanning tissue point by point with a laser and employing optical sectioning combined with photomultiplier tubes to convert signals, CLE enables high-resolution imaging of tissue structures, allowing precise observation of micro-scale tissue layers[1]. This "optical biopsy" capability has led to its clinical application in fields such as Barrett's esophagus surveillance and early screening of gastrointestinal cancers, with particularly significant advantages in the early diagnosis of precancerous gastric lesions. Gastric cancer is one of the most life-threatening malignancies globally, often diagnosed at an advanced stage after the best treatment window has passed. According to 2022 statistics, approximately 969000 new gastric cancer cases and around 660000 deaths occurred worldwide, ranking fifth globally in both cancer incidence and mortality[2]. Compared with traditional white light endoscopy, which has limited ability to distinguish normal tissue from early lesions in precancerous gastric conditions, CLE allows precise identification of subtle changes during the pathological progression of gastric cancer by visualizing the gastric mucosal microstructure. These changes include chronic atrophic gastritis (AG), gastric intestinal metaplasia (GIM), and gastric intraepithelial neoplasia. Studies have shown that, by observing morphological changes in the gastric pits, CLE can sensitively detect pathological features of AG such as reduced gland numbers and dilated glandular pits, with high sensitivity (83.6%) and specificity (99.6%)[3].
Similarly, in diagnosing GIM, CLE can identify characteristic structures such as goblet cells, columnar absorptive cells, and brush borders, achieving a sensitivity of up to 98.13% and a specificity of 95.33%, significantly outperforming conventional endoscopy (sensitivity 36.88%, specificity 91.59%). Moreover, it can distinguish between complete and incomplete GIM, with a consistency of 0.67 (Kappa value) compared to pathological results, providing a new approach for accurate in vivo assessment of precancerous gastric lesions[4].
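The sensitivity, specificity, and Cohen's kappa values quoted above all derive from a 2 × 2 confusion matrix of diagnoses versus pathology. As a reference for how such figures are computed, the sketch below uses illustrative counts only (not data from the cited studies):

```python
def binary_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, and Cohen's kappa from 2x2 confusion counts."""
    n = tp + fn + fp + tn
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    po = (tp + tn) / n                    # observed agreement
    # expected chance agreement from the marginal totals
    pe = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    return sensitivity, specificity, kappa

# illustrative counts, chosen only to demonstrate the arithmetic
sens, spec, kappa = binary_metrics(tp=90, fn=10, fp=5, tn=95)
```

A kappa of 0.67, as reported for complete vs incomplete GIM, indicates substantial but imperfect agreement beyond chance, which is why it is reported alongside raw sensitivity and specificity.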

However, CLE still faces several limitations in practical clinical application. First, CLE remains insufficient for detecting tiny lesions, especially in the early stages of cancer, where its resolution limits may lead to missed diagnoses. Second, CLE image quality can be degraded by uneven illumination, changes in tissue structure, and other external factors. Additionally, owing to its limited field of view, CLE cannot cover the entire esophageal or gastric cavity during actual use, which may leave lesion information incomplete. Furthermore, CLE cannot visualize nuclear structures, which are critical for distinguishing the differentiation state of cancers; this lack of cellular-level detail limits its application in cancer diagnosis and prognosis assessment. Finally, the high cost of CLE equipment, the demand for highly specialized operator skills, and the lack of standardized operational protocols and physician training systems pose significant obstacles to its widespread adoption in medical institutions[5,6].

The introduction of artificial intelligence (AI) provides new approaches to overcome these bottlenecks. Given AI's enormous potential in gastrointestinal medical imaging, the wealth of diagnostic information in radiological, endoscopic, and pathological images makes them ideal targets for intelligent algorithm analysis[7]. In the diagnosis of Barrett's esophagus, the integration of probe-based CLE (pCLE) with deep learning models significantly improves the accuracy of diagnosing dysplasia and early cancer. Moreover, methods such as confusion matrices and Gradient-weighted Class Activation Mapping (Grad-CAM) have been used to validate model performance, with results showing that AI systems can achieve diagnostic precision comparable to that of human experts in detecting dysplasia[8]. To address the issue of CLE image quality interference in complex environments, Zhou et al[9] proposed a context-aware dynamic filtering network called U-CLENet to reduce noise in CLE images. By developing context-aware kernel estimation and multi-scale dynamic fusion modules, and decoupling them for better feature representation, they effectively removed environmental noise, enhancing image clarity for subsequent analysis. Thus, combining AI models with medical imaging, cyst fluid analysis, and innovative diagnostic tools [such as needle-based CLE (nCLE)] can significantly improve the prediction accuracy of high-risk lesions, reduce missed diagnoses and overtreatment, and provide patients with more accurate and personalized treatment plans[10]. Given the unique advantages of CLE in gastrointestinal disease diagnosis and its current clinical challenges, along with the breakthrough progress of AI in medical image analysis, this review aims to summarize the current status and future prospects of AI-assisted CLE technology in the diagnosis and treatment of GI diseases such as gastric cancer, Barrett's esophagus, and pancreatic cystic lesions.
It is hoped that this will provide a theoretical foundation for optimizing early diagnosis strategies for GI diseases and promote the clinical translation and widespread adoption of AI-assisted CLE technologies.

ADVANCES IN THE GASTROINTESTINAL APPLICATIONS OF CONFOCAL ENDOMICROSCOPY
Common imaging diagnostic methods

In the diagnosis and treatment of GI diseases, the selection of imaging modalities requires comprehensive consideration of diagnostic needs, equipment characteristics, and individual patient conditions. Common imaging techniques for GI diseases include ultrasound, computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography-computed tomography (PET-CT), each possessing distinct features and playing crucial roles in clinical practice. Ultrasound is widely used for preliminary screening and bedside evaluation due to its noninvasive nature, convenience, and lack of radiation exposure[11]. It is suitable for dynamically observing lesion size, morphology, and blood flow; however, image quality can be affected by gas interference and equipment limitations[12]. As a mature imaging modality, CT plays an essential role in the diagnosis and assessment of GI diseases by clearly displaying GI structures and lesion characteristics. For example, it can provide information on tumor morphology and vascular distribution in the diagnosis of GI lymphomas[13]. Spectral CT significantly enhances lesion contrast through multi-energy imaging and provides various quantitative parameters, making it suitable for chronic disease patients requiring repeated examinations[14]. MRI offers excellent soft-tissue resolution and multiplanar imaging capabilities, enabling accurate evaluation of spatial relationships between tumors and surrounding structures. It is particularly advantageous for preoperative staging of colorectal cancer; however, the examination process is relatively long, costly, and requires good patient tolerance[15,16]. PET-CT combines metabolic and anatomical imaging, allowing comprehensive assessment of tumor metabolic activity and anatomical localization. It plays a key role in tumor staging, treatment evaluation, and recurrence monitoring, but it is expensive, involves radiation exposure, and has limited sensitivity to low-metabolic lesions[17]. 
Compared with the aforementioned conventional imaging methods, CLE enables real-time, subcellular-level imaging within living tissues. It offers noninvasive, high-resolution advantages and allows for immediate visualization of microscopic tissue structures during endoscopic procedures. This enhances early lesion detection, reduces reliance on traditional biopsies, and effectively lowers the risk of complications such as bleeding and infection[18].

An overview of the development and application advances of CLE in GI diseases

Confocal microscopy technology was first introduced by Marvin Minsky in 1957. However, in its early stages, due to the large size of the equipment and the relatively slow imaging speed, it was difficult to meet the requirements of real-time clinical diagnosis. With the miniaturization of imaging systems and their integration with endoscopic technologies, CLE has gradually emerged as a key method for performing in vivo “optical biopsies.” In 2004, Kiesslich et al[19] first applied a confocal laser endomicroscopy system for in vivo diagnosis of colorectal cancer. Using fluorescein sodium as a contrast agent during colonoscopy, high-resolution images of the colonic mucosa were obtained and compared with traditional histopathological results. The findings showed that CLE exhibited extremely high sensitivity (97.4%) and specificity (99.4%) in detecting intestinal epithelial neoplasia, establishing for the first time the clinical potential of CLE in diagnosing early intestinal lesions. Building on this foundation, in 2006, Kiesslich et al[20] further expanded the application of CLE to the in vivo identification of Barrett's esophagus and its associated neoplasia. By establishing the “Barrett's classification system” and performing CLE on 63 patients with the aid of fluorescein sodium, the study reported a high diagnostic accuracy of 97.4% for Barrett’s esophagus and related lesions. Additionally, the study evaluated interobserver (κ = 0.843) and intraobserver (κ = 0.892) agreement, demonstrating that the method was not only highly accurate but also highly reproducible. In the same year, Kitabatake et al[21] applied CLE to the clinical identification of gastric cancer. In a study of 27 patients with early gastric cancer, fluorescein sodium was used to successfully obtain high-quality images of cancerous areas. 
These images were blindly evaluated by two pathologists, achieving diagnostic accuracies of 94.2% and 96.2%, respectively, thereby confirming the clinical value of CLE in the integrated diagnosis of gastric cancer by combining real-time imaging and histology. To meet the demands of different clinical scenarios, CLE systems have evolved into multiple forms. Based on their structural design and application features, CLE systems are classified into two main types: Integrated systems and probe-based systems. Integrated systems incorporate the laser emission and signal detection modules directly into the distal end of the endoscope, offering significant advantages in imaging quality and depth of penetration. Probe-based systems adopt a miniaturized design and use flexible probes to allow more maneuverable operation, performing better in terms of scanning speed and application versatility. The latter can be further divided into standard probe-type and ultra-thin needle-type variants to meet the microscopic observation needs of different anatomical regions, collectively promoting the widespread clinical application of confocal endomicroscopy technologies[22].

In recent years, with advances in probe miniaturization and optimization of contrast agents, the operability of CLE in narrow lumens has significantly improved, demonstrating unique advantages in areas such as the respiratory system[23], bladder cancer diagnosis[24], assessment of ulcerative colitis (UC), and detection of pancreatic cystic lesions. In evaluating UC, CLE effectively detects crypt architecture alterations associated with disease severity. Especially in patients with endoscopically normal mucosa, CLE can reveal potential pathological activity by identifying microvascular abnormalities and fluorescein leakage, further validating its crucial role in detecting subtle lesions[25]. Simultaneously, combining pCLE with magnifying chromoendoscopy and narrow-band imaging has been proven effective in distinguishing UC-associated dysplasia, isolated adenomas, and non-neoplastic regenerative lesions. By observing back-to-back crypt arrangements and dark trabecular structures, this method achieved 100% sensitivity, 83% specificity, and 92% overall accuracy in distinguishing cancer or dysplasia[26]. These findings further support the significant clinical value of CLE in evaluating UC-related pathological changes and early tumor diagnosis. In detecting pancreatic cystic lesions such as intraductal papillary mucinous neoplasms (IPMNs), the combination of CLE and endoscopic ultrasonography (EUS) has led to the development of EUS-guided needle-based confocal laser endomicroscopy (EUS-nCLE). This technique effectively distinguishes different types of pancreatic cystic lesions and identifies high-risk lesions, offering new possibilities for the early detection of pancreatic cancer[27]. In another similar study, EUS-nCLE was performed on patients with pancreatic cystic tumors larger than 2 cm. The results showed a 90% success rate in image acquisition and accurate differentiation between mucinous and non-mucinous cysts. 
For the diagnosis of mucinous cysts, EUS-nCLE achieved an accuracy of 80% with no adverse events observed, confirming the feasibility and safety of this technique in identifying types of pancreatic cystic lesions and detecting high-risk changes[28].

Subsequently, the introduction of artificial intelligence-assisted diagnostic systems has significantly improved the efficiency and accuracy of image analysis. Meanwhile, the development of molecular imaging probes offers new methods for tumor-targeted identification and margin delineation. These technological advancements are collectively transforming CLE from a purely diagnostic tool into a multifunctional platform integrating diagnosis, therapy, and prognostic evaluation.

RESEARCH ADVANCES AND APPLICATIONS OF DEEP LEARNING IN MEDICAL IMAGING ANALYSIS

The rapid development of deep learning technology has propelled medical image analysis from traditional machine learning methods to end-to-end automatic feature learning. Early medical image analysis mainly relied on traditional machine learning methods such as random forests[29], which required complex manual feature extraction processes. The successful application of convolutional neural networks (CNNs) in image processing ushered medical image analysis into a new stage of development. AlexNet[30], as a landmark network architecture, not only demonstrated the feasibility of large-scale deep learning models but also pioneered a new paradigm for medical image classification through GPU-accelerated training. Subsequently, ResNet[31] addressed the challenges of training deep networks by introducing residual connections, significantly enhancing model performance and laying an important foundation for subsequent medical image analysis.

In practical applications of medical image analysis, different tasks require different technical approaches. For image classification tasks, Vision Transformer[32] has surpassed traditional CNNs in performance on multiple medical image classification benchmarks by segmenting images and applying self-attention mechanisms. Object detection tasks focus more on precise localization of lesions; algorithms like Faster R-CNN[33] and YOLO[34] have optimized detection processes to achieve real-time detection capabilities while maintaining accuracy. In the most challenging field of image segmentation, U-Net[35], with its unique encoder-decoder structure, has become the foundational architecture for medical image segmentation. Subsequent models like DeepLab[36] have further improved segmentation accuracy by introducing atrous convolution and multi-scale pooling, promoting the trend of diversification in medical image analysis.

In recent years, medical image analysis technology has exhibited a clear trend towards cross-modality and generalization. Given the diversity of medical image modalities and the high cost of annotations, researchers have developed a series of innovative solutions. SAM-Med3D[37] has significantly enhanced the model’s generalization ability across anatomical structures and modalities by constructing a large-scale dataset containing 140000 cases and employing a two-stage training strategy. Diffusion models[38], with their unique progressive denoising process and powerful generative capabilities, have shown significant advantages in medical image analysis. In specific applications of diffusion models, DiffMIC[39] innovatively integrates multi-granularity information through a dual-conditioning guidance strategy in tasks such as placental maturity grading, skin lesion classification, and diabetic retinopathy grading; the DCE-diff model[40] has made a breakthrough by combining non-contrast structural MRI sequences with apparent diffusion coefficient maps to synthesize dynamic contrast-enhanced MRI images, laying the foundation for further applications of diffusion models in medical image analysis.

To address challenges such as scarce annotated data and small sample sizes, the combination of self-supervised learning and generative mechanisms has become a focal point of recent research. Graikos et al[41] proposed a method that guides diffusion models through self-supervised learning, reducing the need for manual annotations while improving the quality of generated images and the accuracy of downstream tasks. In data augmentation, Mekala et al[42] utilized generative adversarial networks (GANs) to synthesize high-quality skin lesion images, effectively alleviating the scarcity of medical image data and significantly enhancing the performance of classification models. In coordinating local fine structures with global semantic information, traditional methods often struggle with blurred organ boundaries or lesion heterogeneity: overemphasis on local details can produce segmentations lacking anatomical consistency, while focusing too much on global features may miss subtle lesions. To address this challenge, LoG-VMamba[43] achieved groundbreaking progress in 2D and 3D medical image segmentation tasks by explicitly maintaining spatial adjacency while compressing global context. LKM-UNet[44] innovatively combines large-kernel Mamba models with U-Net structures to effectively model long-range dependencies at relatively low computational cost. These new architectures surpass the limitations of traditional CNNs and Transformers, achieving a better balance between local and global features in medical imaging tasks.

To enhance the practical value of AI-assisted diagnostic systems in real-world medical scenarios, future development of intelligent medical imaging models is expected to follow several key trends. Generalization across modalities and centers will become a research focus, with adaptive feature extraction and domain adaptation techniques developed to address the generalization bottleneck caused by data distribution differences among medical institutions. Given the high cost of medical data annotation, efficient small-sample learning paradigms based on self-supervised and semi-supervised learning will be further explored. Finally, improving the interpretability of model decision processes and optimizing deployment efficiency will become crucial bridges connecting algorithm research and clinical practice.

ADVANCES IN AI-ASSISTED CLE IN GI DISEASES

In recent years, the integration of AI technologies with CLE has provided new approaches for the early diagnosis and therapeutic decision-making of gastrointestinal diseases. In clinical practice, AI-assisted CLE systems have enabled real-time assessment of lesions in the esophagus, stomach, and colorectal regions. This section highlights recent advancements in the application of AI-assisted CLE in the diagnosis of pancreatic cystic lesions (PCLs), Barrett's esophagus, gastric cancer, and colorectal tumors. Table 1 summarizes these advancements.

Table 1 Summary of research progress on artificial intelligence-assisted confocal laser endomicroscopy in gastrointestinal medical imaging.

Ref. | Purpose | Model | Input | Datasets | Experimental results
Machicado et al[45] | Develop two CNN-based algorithms for EUS-nCLE image analysis to assist in the accurate diagnosis and risk stratification of IPMNs | SBM, HBM | EUS-nCLE video frames with region of interest segmentation and feature extraction | EUS-nCLE video images consisting of 15027 frames from 35 histologically confirmed IPMN patients, including 18 cases of HGD-Ca | Sensitivity: SBM and HBM: 83.3%; accuracy: SBM: 82.9%, HBM: 85.7%; specificity: SBM: 82.4%, HBM: 88.2%
Lee et al[46] | Develop a deep learning-based computer-aided diagnostic system to support EUS-nCLE in classifying pancreatic cystic lesions | VGG19 | 224 × 224 nCLE video frame images after data augmentation | 68 nCLE video clips collected from King Chulalongkorn Memorial Hospital | Classification accuracy of nonmucinous PCLs: pseudocysts (98%), SCN (93.11%), NET (84.31%); mucinous PCLs: IPMN (84.43%), MCN (86.1%)
André et al[47] | Design pCLE diagnostic software for the automated classification of colonic polyps to support lesion identification during examinations | KNN | Feature vector data | 135 colorectal lesion samples from 71 patients, including 93 neoplastic and 42 non-neoplastic lesions | Accuracy: 89.6%; sensitivity: 92.5%; specificity: 83.3%
Gessert et al[48] | CNNs and multiple transfer learning strategies for real-time classification of colorectal cancer in peritoneal and colonic tissues, providing auxiliary decision support during surgery | VGG-16, Inception-V3, Densenet121, SE-Resnext50 | 384 × 384 images | 1577 CLE images in four classes: Healthy colon, malignant colon, healthy peritoneum, and malignant peritoneum | Peritoneal metastases: 97.1 AUC; primary colorectal lesions: 73.1 AUC
Pulido et al[49] | Enhance the classification accuracy of pCLE videos for Barrett's esophagus and related lesions using deep learning models | AttentionPooling, Multi-Module AttentionPooling | 224 × 224 images after removing irrelevant parts | pCLE videos from 78 patients, totaling 1057 clips, annotated into 3 categories | AttentionPooling performed best, with an F1 score of 0.89; Multi-Module AttentionPooling showed the highest sensitivity for precancerous lesions (0.71)
Tong et al[50] | Design the CAESNet model to leverage a large number of unlabeled eCLE images for improving the accuracy of dysplasia grading in Barrett's esophagus | CAESNet | 256 × 256 images after data augmentation | eCLE dataset of 429 expert-annotated images (9 categories) and 2826 unannotated images | Best performance at 32 layers: Accuracy: 0.824 ± 0.0329; precision: 0.832 ± 0.0302; F1 score: 0.816 ± 0.0342; Cohen's kappa: 0.781 ± 0.04
Su et al[51] | Apply deep learning to CLE for semantic segmentation of goblet cells in gastric mucosal intestinal metaplasia | GCSCLE | 512 × 512 × 3 image data | 334 clinical CLE gastric images from 62 subjects, covering different regions of the stomach | IoU: 0.8795; Dice coefficient: 0.8664; precision: 0.8554; recall: 0.8834; accuracy: 0.9925
Cho et al[52] | Develop an AI-based real-time assessment system for the automatic detection of gastric cancer in CLE images | EfficientNetV2 | Data-augmented grey CLE images | Training set: 5984 tumor and 5984 normal images; testing set: 1496 tumor and 2586 normal images | Accuracy: 0.990; specificity: 0.982; sensitivity: 1.000

In the domain of AI-assisted diagnosis of pancreatic cystic lesions, Machicado et al[45] developed two CNN models based on EUS-nCLE images to support risk stratification of IPMNs. By analyzing pre-processed video frames, they proposed a guided segmentation model and a relatively independent global feature extraction model. The results demonstrated high sensitivity and accuracy of these AI algorithms in identifying high-grade dysplasia, outperforming current guideline-based standards and providing essential reference for the real-time, automated diagnosis of IPMNs. In further efforts to identify PCL subtypes, Lee et al[46] developed a CNN-based diagnostic system for classifying and diagnosing PCL using nCLE video image datasets. The system achieved high classification accuracy across five PCL subtypes, demonstrating that CNN-enhanced computer-aided diagnostic systems not only improve diagnostic precision but also significantly reduce expert interpretation time, showing promising potential for clinical deployment.

In AI-assisted diagnosis of lower gastrointestinal diseases, André et al[47] designed an AI-based automatic classification software capable of distinguishing benign from malignant polyps in pCLE videos. The results indicated that the software achieved an accuracy of 89.6%, a sensitivity of 92.5%, and a specificity of 83.3% in diagnosing colonic polyps—comparable to the offline assessments of two experienced endoscopists. Moreover, the tool provided interpretable outputs, offering a novel approach to enhance the precision of colonic polyp diagnosis. Subsequently, Gessert et al[48] investigated peritoneal metastases from colorectal cancer using VGG-16, Inception-V3, Densenet121, and SE-Resnext50, along with various transfer learning strategies (retraining classifiers, partial freezing, fine-tuning, and training from scratch). Their evaluation employed cross-validation and multiple metrics including area under the curve (AUC), receiver operating characteristic, accuracy, sensitivity, specificity, and F1 score. The system achieved an AUC of 97.1 for detecting peritoneal metastases and 73.1 for primary colorectal lesions, providing robust support for real-time diagnosis and intraoperative decision-making.
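The AUC values reported above have a direct probabilistic reading: the AUC is the probability that a randomly chosen positive (malignant) sample receives a higher model score than a randomly chosen negative one. The sketch below computes it via the equivalent Mann-Whitney rank comparison; the scores are illustrative only, not data from the cited study:

```python
def auc(pos_scores, neg_scores):
    """AUC as the probability that a random positive outranks a random
    negative (ties count half) -- the Mann-Whitney U formulation."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# illustrative classifier scores for "malignant" and "healthy" frames
pos = [0.9, 0.8, 0.7, 0.6]
neg = [0.5, 0.4, 0.8, 0.2]
```

Perfect separation of the two classes yields an AUC of 1.0, which is why values such as 97.1 (on a 0-100 scale) indicate near-perfect discrimination of peritoneal metastases.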

Pulido et al[49] applied a deep learning framework to the diagnosis of Barrett's esophagus and its precancerous lesions. By introducing two novel architectures—AttentionPooling and Multi-Module AttentionPooling deep networks—they significantly improved classification performance and enhanced model interpretability. These developments are particularly valuable for optimizing current biopsy protocols, especially in improving sensitivity to dysplasia. Tong et al[50] proposed a Convolutional AutoEncoder based Semi-supervised Network (CAESNet) to improve multi-class classification of endoscope-based CLE (eCLE) images, focusing on differentiating various forms of Barrett's esophagus and associated dysplastic changes. By leveraging both labeled and a large number of unlabeled images, CAESNet effectively extracted features and improved classification performance, offering valuable support for real-time clinical decision-making.

In the detection of gastric cancer and its precancerous conditions, Su et al[51] addressed the issues of low segmentation accuracy and labor inefficiency in identifying goblet cells (GCs) in CLE images. They constructed a GC segmentation dataset based on GIM and proposed GCSCLE, an automatic GC segmentation method based on an improved U-Net architecture. Experimental results showed that GCSCLE achieved an IoU of 87.95% and a Dice score of 86.64% on their in-house dataset, outperforming mainstream segmentation models and approaching the performance of manual annotation. This demonstrated the potential of deep learning in assisting GIM diagnosis and provided a novel technological pathway for early detection and treatment of gastric cancer. Cho et al[52] further developed a real-time AI-based CLE image analysis system for gastric cancer, capable of automatic tumor cell detection. The model exhibited excellent performance in tumor detection and histological subtype classification, offering an efficient and accurate solution for real-time gastric cancer diagnosis.
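The IoU and Dice scores used to evaluate GCSCLE are overlap measures between the predicted and ground-truth segmentation masks. A minimal sketch of both, on toy flattened binary masks (not the authors' evaluation code), is:

```python
def iou_dice(pred, target):
    """IoU and Dice coefficient for flat binary masks (lists of 0/1)."""
    inter = sum(p & t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    iou = inter / union if union else 1.0
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    return iou, dice

# toy 1-D masks standing in for flattened segmentation maps
pred = [1, 1, 1, 0, 0, 1]
target = [1, 1, 0, 0, 1, 1]
iou, dice = iou_dice(pred, target)
```

The two scores are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why GCSCLE's IoU of 87.95% and Dice of 86.64% move together.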

In summary, the synergistic innovation of AI and CLE has significantly enhanced the diagnostic performance for gastrointestinal tumors and their precancerous lesions, enabling comprehensive optimization from microscopic mucosal structure observation to clinical decision support. Such intelligent diagnostic systems not only provide objective and reproducible diagnostic evidence but also effectively shorten the diagnostic window, gaining critical time for clinical interventions. The current technological developments already encompass key aspects such as lesion detection, risk stratification, and treatment planning, forming a complete intelligent diagnostic and therapeutic workflow.

CONCLUSION

The combination of CLE with AI's powerful feature extraction capabilities has overcome the subjectivity and efficiency limitations of traditional manual interpretation. Current research covers key areas including pancreatic cystic lesions, Barrett's esophagus, gastric cancer, and colorectal tumors, demonstrating accuracy comparable to experts in lesion detection, risk stratification, and classification diagnosis, and even surpassing conventional methods in certain scenarios.

In the future, the development of AI-CLE diagnostic systems is expected to follow several important directions.

In terms of technological integration, the deep fusion of molecular imaging technologies with edge computing architectures will be key to overcoming current technical bottlenecks. It will be especially important to develop innovative deep learning models that support intelligent diagnostic frameworks with strong generalizability. A major research focus will be building feature learning spaces capable of adapting to the diverse characteristics of targeted molecular probes, as well as designing high-sensitivity recognition algorithms for specific biomarkers such as MUC2.

At the level of clinical translation, optimizing edge-intelligent deployment strategies will be critical. The core requirement for real-world application is achieving real-time, reliable bedside diagnostics through the coordinated design of lightweight network architectures and dynamic quality control mechanisms.

Meanwhile, advances in multimodal imaging fusion will significantly enhance diagnostic performance. Research into cross-scale feature alignment methods will facilitate the integration of CLE's microscopic imaging capabilities with the macroscopic structural information of other imaging modalities, enabling comprehensive and clinically interpretable lesion assessment systems. Key to achieving this will be solving the spatial registration challenges posed by differing image resolutions and establishing robust feature correlation models.

Lastly, CLE's real-time imaging capability offers unique advantages for dynamic pathological analysis. By continuously capturing and analyzing the temporal evolution of mucosal microstructures, deep learning-based models for predicting disease progression may serve as powerful dynamic assessment tools to support precision medicine.

Footnotes

Provenance and peer review: Invited article; Externally peer reviewed.

Peer-review model: Single blind

Specialty type: Computer science, artificial intelligence

Country of origin: China

Peer-review report’s classification

Scientific Quality: Grade C, Grade D

Novelty: Grade B, Grade C

Creativity or Innovation: Grade B, Grade C

Scientific Significance: Grade B, Grade B

P-Reviewer: Tang Y S-Editor: Liu JH L-Editor: A P-Editor: Wang WB

References
1.  Ascencio M, Collinet P, Cosson M, Mordon S. [The place of confocal microscopy in gynecology]. J Gynecol Obstet Biol Reprod (Paris). 2008;37:64-71.
2.  Yao YF, Sun KX, Zheng RS. [Interpretation and analysis of the Global Cancer Statistics Report 2022: a comparison between China and the world]. Zhongguo Puwai Jichu Yu Linchuang Zazhi. 2024;31:769-780.
3.  Zhang JN, Li YQ, Zhao YA, Yu T, Zhang JP, Guo YT, Liu H. Classification of gastric pit patterns by confocal endomicroscopy. Gastrointest Endosc. 2008;67:843-853.
4.  Guo YT, Li YQ, Yu T, Zhang TG, Zhang JN, Liu H, Liu FG, Xie XJ, Zhu Q, Zhao YA. Diagnosis of gastric intestinal metaplasia with confocal laser endomicroscopy in vivo: a prospective study. Endoscopy. 2008;40:547-553.
5.  Li YQ, Li CQ. [Application of Confocal Endomicroscopy in the Diagnosis of Gastrointestinal Tumors]. Zhonghua Xiaohuabing Yu Yingxiang Zazhi. 1:1-4.
6.  Han W, Kong R, Wang N, Bao W, Mao X, Lu J. Confocal Laser Endomicroscopy for Detection of Early Upper Gastrointestinal Cancer. Cancers (Basel). 2023;15:776.
7.  Berbís MA, Aneiros-Fernández J, Mendoza Olivares FJ, Nava E, Luna A. Role of artificial intelligence in multidisciplinary imaging diagnosis of gastrointestinal diseases. World J Gastroenterol. 2021;27:4395-4412.
8.  Guleria S, Shah TU, Pulido JV, Fasullo M, Ehsan L, Lippman R, Sali R, Mutha P, Cheng L, Brown DE, Syed S. Deep learning systems detect dysplasia with human-like accuracy using histopathology and probe-based confocal laser endomicroscopy. Sci Rep. 2021;11:5086.
9.  Zhou J, Dong X, Liu Q. Context-aware dynamic filtering network for confocal laser endomicroscopy image denoising. Phys Med Biol. 2023;68:195014.
10.  Kaimakliotis P, Riff B, Pourmand K, Chandrasekhara V, Furth EE, Siegelman ES, Drebin J, Vollmer CM, Kochman ML, Ginsberg GG, Ahmad NA. Sendai and Fukuoka Consensus Guidelines Identify Advanced Neoplasia in Patients With Suspected Mucinous Cystic Neoplasms of the Pancreas. Clin Gastroenterol Hepatol. 2015;13:1808-1815.
11.  Wang CY, Feng J, Duan XL. [Comparison on Diagnostic Efficacy Between Color Doppler High-frequency Ultrasound and MSCT in the Diagnosis of Gastrointestinal Lymphoma]. Zhongguo CT He MRI Zazhi. 2020;18:144-146.
12.  Wang FF, Pang L, Shi XW. [Comparative analysis of ultrasound and CT in diagnosis of acute and chronic appendicitis in children]. Yingxiang Yanjiu Yu Yixue Yingyong. 2020;4:32-33.
13.  Hu XM, Sun XH. [Research progress on imaging diagnosis of primary gastrointestinal lymphoma]. Zhongguo Zhongliu Linchuang Yu Kangfu. 2023;30:321-326.
14.  Xie LX, Zeng HJ, Li ZY, Liu XJ, Shuai T, Huang ZX, Wu B, Song B. [Application of spectral CT in diagnosis of gastrointestinal diseases]. Zhongguo Puwai Jichu Yu Linchuang Zazhi. 2023;30:1313-1318.
15.  Qian Y. [Research Advance of Gastrointestinal Stromal Tumors in Imaging]. Xiandai Yixue Yingxiangxue. 2024;33:2165-2168.
16.  Rollvén E, Holm T, Glimelius B, Lörinc E, Blomqvist L. Potentials of high resolution magnetic resonance imaging versus computed tomography for preoperative local staging of colon cancer. Acta Radiol. 2013;54:722-730.
17.  Zhai WH, He W. [Predictive value of 18F-FDG PET/CT quantization parameters SUVpeak, MTV and TLG in patients with diffuse large B-cell lymphoma]. Fenzi Yingxiangxue Zazhi. 2021;44:787-791.
18.  Dhali A, Maity R, Rathna RB, Biswas J. Confocal laser endomicroscopy for gastric neoplasm. World J Gastrointest Endosc. 2024;16:540-544.
19.  Kiesslich R, Burg J, Vieth M, Gnaendiger J, Enders M, Delaney P, Polglase A, McLaren W, Janell D, Thomas S, Nafe B, Galle PR, Neurath MF. Confocal laser endoscopy for diagnosing intraepithelial neoplasias and colorectal cancer in vivo. Gastroenterology. 2004;127:706-713.
20.  Kiesslich R, Gossner L, Goetz M, Dahlmann A, Vieth M, Stolte M, Hoffman A, Jung M, Nafe B, Galle PR, Neurath MF. In vivo histology of Barrett's esophagus and associated neoplasia by confocal laser endomicroscopy. Clin Gastroenterol Hepatol. 2006;4:979-987.
21.  Kitabatake S, Niwa Y, Miyahara R, Ohashi A, Matsuura T, Iguchi Y, Shimoyama Y, Nagasaka T, Maeda O, Ando T, Ohmiya N, Itoh A, Hirooka Y, Goto H. Confocal endomicroscopy for the diagnosis of gastric cancer in vivo. Endoscopy. 2006;38:1110-1114.
22.  Feng JX, Zhou ZQ, Li SY. [Application and research progress of confocal laser endomicroscopy in pulmonary diseases]. Zhonghua Jie He He Hu Xi Za Zhi. 2021;44:260-266.
23.  Liu XY, Song XL. [Application of confocal laser endomicroscopy in respiratory diseases]. Zhonghua Jie He He Hu Xi Za Zhi. 2019;42:125-128.
24.  Wu J, Wang YC, Dai B, Ye DW, Zhu YP. Optical biopsy of bladder cancer using confocal laser endomicroscopy. Int Urol Nephrol. 2019;51:1473-1479.
25.  Maione F, Giglio MC, Luglio G, Rispo A, D'Armiento M, Manzo B, Cassese G, Schettino P, Gennarelli N, Siciliano S, D'Armiento FP, De Palma GD. Confocal laser endomicroscopy in ulcerative colitis: beyond endoscopic assessment of disease activity. Tech Coloproctol. 2017;21:531-540.
26.  Ohmiya N, Horiguchi N, Tahara T, Yoshida D, Yamada H, Nagasaka M, Nakagawa Y, Shibata T, Tsukamoto T, Kuroda M. Usefulness of confocal laser endomicroscopy to diagnose ulcerative colitis-associated neoplasia. Dig Endosc. 2017;29:626-633.
27.  Krishna S, Abdelbaki A, Hart PA, Machicado JD. Endoscopic Ultrasound-Guided Needle-Based Confocal Endomicroscopy as a Diagnostic Imaging Biomarker for Intraductal Papillary Mucinous Neoplasms. Cancers (Basel). 2024;16:1238.
28.  Kadayifci A, Atar M, Basar O, Forcione DG, Brugge WR. Needle-Based Confocal Laser Endomicroscopy for Evaluation of Cystic Neoplasms of the Pancreas. Dig Dis Sci. 2017;62:1346-1353.
29.  Breiman L. Random Forests. Machine Learning. 2001;45:5-32.
30.  Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Commun ACM. 2017;60:84-90.
31.  He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
32.  Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai XH, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, Uszkoreit J, Houlsby N. An image is worth 16x16 words: Transformers for image recognition at scale. 9th International Conference on Learning Representations (ICLR); 2021 May 3-7; Virtual, Online. ICLR, 2021.
33.  Ren S, He K, Girshick R, Sun J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans Pattern Anal Mach Intell. 2017;39:1137-1149.
34.  Redmon J, Divvala S, Girshick R, Farhadi A. You Only Look Once: Unified, Real-Time Object Detection. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
35.  Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. LNCS. 2015;234-241.
36.  Chen LC, Papandreou G, Kokkinos I, Murphy K, Yuille AL. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans Pattern Anal Mach Intell. 2018;40:834-848.
37.  Wang HY, Guo SZ, Ye J, Deng ZY, Cheng JL, Li TB, Chen JP, Su YZ, Huang ZY, Shen YQ, Fu B, Zhang ST, He JJ, Qiao Y. SAM-Med3D: Towards General-purpose Segmentation Models for Volumetric Medical Images. arXiv. 2023.
38.  Ho J, Jain A, Abbeel P. Denoising diffusion probabilistic models. 34th Conference on Neural Information Processing Systems (NeurIPS); 2020 Dec 6-12; Virtual, Online. Neural Information Processing Systems Foundation, 2020.
39.  Yang Y, Fu H, Aviles-Rivero AI, Schönlieb C, Zhu L. DiffMIC: Dual-Guidance Diffusion Network for Medical Image Classification. LNCS. 2023.
40.  M KK, Ramanarayanan S, S S, Sarkar A, Gayathri MN, Ram K, Sivaprakasam M. DCE-diff: Diffusion Model for Synthesis of Early and Late Dynamic Contrast-Enhanced MR Images from Non-Contrast Multimodal Inputs. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2024.
41.  Graikos A, Yellapragada S, Le MQ, Kapse S, Prasanna P, Saltz J, Samaras D. Learned representation-guided diffusion models for large-image generation. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2024;2024:8532-8542.
42.  Mekala RR, Pahde F, Baur S, Chandrashekar S, Diep M, Wenzel M, Wisotzky EL, Yolcu GU, Lapuschkin S, Ma J, Eisert P, Lindvall M, Porter A, Samek W. Synthetic Generation of Dermatoscopic Images with GAN and Closed-Form Factorization. arXiv. 2024.
43.  Dang TD, Nguyen HH, Tiulpin A. LoG-VMamba: Local-Global Vision Mamba for Medical Image Segmentation. LNCS. 2025;15473:222-240.
44.  Wang JH, Chen JT, Chen D, Wu J. LKM-UNet: Large Kernel Vision Mamba UNet for Medical Image Segmentation. LNCS. 2024;15008:360-370.
45.  Machicado JD, Chao WL, Carlyn DE, Pan TY, Poland S, Alexander VL, Maloof TG, Dubay K, Ueltschi O, Middendorf DM, Jajeh MO, Vishwanath AB, Porter K, Hart PA, Papachristou GI, Cruz-Monserrate Z, Conwell DL, Krishna SG. High performance in risk stratification of intraductal papillary mucinous neoplasms by confocal laser endomicroscopy image analysis with convolutional neural networks (with video). Gastrointest Endosc. 2021;94:78-87.e2.
46.  Lee TC, Angelina CL, Kongkam P, Wang HP, Rerknimitr R, Han ML, Chang HT. Deep-Learning-Enabled Computer-Aided Diagnosis in the Classification of Pancreatic Cystic Lesions on Confocal Laser Endomicroscopy. Diagnostics (Basel). 2023;13.
47.  André B, Vercauteren T, Buchner AM, Krishna M, Ayache N, Wallace MB. Software for automated classification of probe-based confocal laser endomicroscopy videos of colorectal polyps. World J Gastroenterol. 2012;18:5560-5569.
48.  Gessert N, Bengs M, Wittig L, Drömann D, Keck T, Schlaefer A, Ellebrecht DB. Deep transfer learning methods for colon cancer classification in confocal laser microscopy images. Int J Comput Assist Radiol Surg. 2019;14:1837-1845.
49.  Pulido JV, Guleria S, Ehsan L, Shah T, Syed S, Brown DE. Screening for Barrett's esophagus with probe-based confocal laser endomicroscopy videos. Proc IEEE Int Symp Biomed Imaging. 2020;2020:1659-1663.
50.  Tong L, Wu H, Wang MD. CAESNet: Convolutional AutoEncoder based Semi-supervised Network for improving multiclass classification of endomicroscopic images. J Am Med Inform Assoc. 2019;26:1286-1296.
51.  Su D, Zheng X, Wang S, Qi Q, Li Z. Goblet cells segmentation from confocal laser endomicroscopy with an improved U-Net. Biomed Phys Eng Express. 2023;9.
52.  Cho H, Moon D, Heo SM, Chu J, Bae H, Choi S, Lee Y, Kim D, Jo Y, Kim K, Hwang K, Lee D, Choi HK, Kim S. Artificial intelligence-based real-time histopathology of gastric cancer using confocal laser endomicroscopy. NPJ Precis Oncol. 2024;8:131.