1
Sui J, Luo JS, Xiong C, Tang CY, Peng YH, Zhou R. Bibliometric analysis on the top one hundred cited studies on gastrointestinal endoscopy. World J Gastrointest Endosc 2025; 17:100219. [PMID: 39850908; PMCID: PMC11752471; DOI: 10.4253/wjge.v17.i1.100219]
Abstract
BACKGROUND Gastrointestinal endoscopy has been widely used in the diagnosis and treatment of gastrointestinal diseases, and a great many studies on it have been published. AIM To analyze the characteristics of the top 100 cited articles on gastrointestinal endoscopy. METHODS A bibliometric analysis was conducted. The publications and their features were extracted from the Web of Science Core Collection, Science Citation Index-Expanded database. Excel, the Web of Science database, and SPSS software were used for the statistical description and analysis; VOSviewer and MapChart were used for the visualizations. RESULTS The top 100 cited articles were published between 1976 and 2022. Guidelines (52%) and clinical trials (37%) are the main article types, and the average publication year of the guidelines is much later than that of the clinical trials (2015 vs 1998). Among the clinical trials, diagnostic studies (27.0%), cohort studies (21.6%), case series (13.5%), and cross-sectional studies (10.8%) account for a large proportion. Average citations do not differ significantly across the study types and designs of the enrolled studies. Most of the 100 articles were written by European authors and published in endoscopy journals (65%). Top general medical journals, such as the Lancet, New England Journal of Medicine, and JAMA, also reported studies in this field. The hot spots among the involved diseases include neoplasm- or cancer-related diseases, inflammatory diseases, obstructive diseases, gastrointestinal hemorrhage, and ulcer. Endoscopic surgery, endoscopic therapy, and stent placement are frequently studied. CONCLUSION Our research helps delineate the field and identify the characteristics of the most highly cited articles. Notably, far fewer clinical trials than guidelines were included, indicating potential areas for future high-quality clinical trials.
Affiliation(s)
- Jing Sui
- Department of Anesthesiology, Deyang People’s Hospital, Deyang 618000, Sichuan Province, China
- Jian-Sheng Luo
- Department of Anesthesiology, Deyang People’s Hospital, Deyang 618000, Sichuan Province, China
- Chao Xiong
- Department of Anesthesiology, Deyang People’s Hospital, Deyang 618000, Sichuan Province, China
- Chun-Yong Tang
- Department of Anesthesiology, Deyang People’s Hospital, Deyang 618000, Sichuan Province, China
- Yan-Hua Peng
- Department of Anesthesiology, Deyang People’s Hospital, Deyang 618000, Sichuan Province, China
- Department of Anesthesiology, Affiliated Hospital of Southwest Medical University, Luzhou 646000, Sichuan Province, China
- Rui Zhou
- Department of Anesthesiology and Perioperative Medicine, Shanghai Fourth People’s Hospital, School of Medicine, Tongji University, Shanghai 200434, China
2
Wang YP, Jheng YC, Hou MC, Lu CL. The optimal labelling method for artificial intelligence-assisted polyp detection in colonoscopy. J Formos Med Assoc 2024:S0929-6646(24)00582-5. [PMID: 39730273; DOI: 10.1016/j.jfma.2024.12.022]
Abstract
BACKGROUND The methodology for labeling colon polyps when establishing databases for machine learning is neither well described nor standardized. We aimed to identify the annotation method that yields the most accurate polyp-detection model. METHODS A total of 3542 colonoscopy polyp images were obtained from the endoscopy database of a tertiary medical center. Two experienced endoscopists manually annotated each polyp with (1) exact outline segmentation and (2) a standard rectangular box close to the polyp margin, extended by 10%, 20%, 30%, 40%, and 50% in both width and length for AI model setup. The images were randomly divided into training and validation sets in a 4:1 ratio. A U-Net convolutional network architecture was used to develop the automatic segmentation machine learning model. A separate, unrelated verification set was established to evaluate polyp-detection performance across the different segmentation methods. RESULTS Extending the bounding box by 20% of the polyp margin gave the best performance in accuracy (95.42%), sensitivity (94.84%), and F1-score (95.41%). The exact outline segmentation model showed excellent sensitivity (99.6%) but the worst precision (77.47%). The 20% model was the best among the six models (AUC = 0.971; confidence interval = 0.957-0.985). CONCLUSIONS Labeling methodology affects the predictive performance of AI models for polyp detection. Extending the bounding box by 20% of the polyp margin produced the best polyp-detection model based on AUC. A standardized approach to colon polyp labeling is essential for comparing the precision of different AI models.
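The margin-extension scheme described above is easy to state concretely. Below is a minimal sketch (not the authors' code) of expanding a tight polyp bounding box by a fixed percentage of its width and height, assuming symmetric expansion clipped to the image bounds:

```python
def expand_bbox(x, y, w, h, pct, img_w, img_h):
    """Expand a tight (x, y, w, h) box by pct of its width/height in total,
    split evenly on each side and clipped to the image; pct=0.2 mirrors the
    best-performing 20% setting reported in the abstract."""
    dw, dh = w * pct / 2.0, h * pct / 2.0
    x0, y0 = max(0.0, x - dw), max(0.0, y - dh)
    x1 = min(float(img_w), x + w + dw)
    y1 = min(float(img_h), y + h + dh)
    return x0, y0, x1 - x0, y1 - y0
```

For example, a 100x100 box at (10, 10) expanded by 20% in a 640x480 image becomes a 120x120 box clipped at the image origin.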
Affiliation(s)
- Yen-Po Wang
- Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taiwan; Division of Gastroenterology, Taipei Veterans General Hospital, Taiwan; Institute of Brain Science, National Yang Ming Chiao Tung University School of Medicine, Taiwan; Faculty of Medicine, National Yang Ming Chiao Tung University School of Medicine, Taiwan
- Ying-Chun Jheng
- Department of Medical Research, Taipei Veterans General Hospital, Taiwan; Big Data Center, Taipei Veterans General Hospital, Taiwan
- Ming-Chih Hou
- Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taiwan; Division of Gastroenterology, Taipei Veterans General Hospital, Taiwan; Faculty of Medicine, National Yang Ming Chiao Tung University School of Medicine, Taiwan
- Ching-Liang Lu
- Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taiwan; Division of Gastroenterology, Taipei Veterans General Hospital, Taiwan; Institute of Brain Science, National Yang Ming Chiao Tung University School of Medicine, Taiwan
3
Wang L, Wan J, Meng X, Chen B, Shao W. MCH-PAN: gastrointestinal polyp detection model integrating multi-scale feature information. Sci Rep 2024; 14:23382. [PMID: 39379452; PMCID: PMC11461898; DOI: 10.1038/s41598-024-74609-9]
Abstract
The rise of object detection models has brought new breakthroughs to the development of clinical decision support systems. In the field of gastrointestinal polyp detection, however, challenges remain, such as uncertainty in polyp identification and inadequate handling of variations in polyp scale. To address these challenges, this paper proposes a novel gastrointestinal polyp object detection model that automatically identifies and accurately labels polyp regions in gastrointestinal images. The model integrates multi-channel information to enhance the expressiveness and robustness of channel features, better coping with the complexity of polyp structures. A hierarchical structure improves the model's adaptability to multi-scale targets, addressing the large variations in polyp scale, and a channel attention mechanism improves the accuracy of target localization and reduces diagnostic uncertainty. By integrating these strategies, the proposed model achieves accurate polyp detection, providing clinicians with reliable and valuable references. Experimental results show that the model exhibits superior performance in gastrointestinal polyp detection, which can help improve the diagnosis of digestive system diseases and provides useful references for related research fields.
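The abstract does not specify the exact attention design, so as a generic illustration only: a squeeze-and-excitation-style channel attention gate pools each channel to a scalar, passes the result through two small fully connected layers, and rescales the channels by the resulting sigmoid weights. A NumPy sketch with randomly initialized (untrained) weights:

```python
import numpy as np

def channel_attention(feat, reduction=4, seed=0):
    """Gate a (C, H, W) feature map by per-channel weights:
    squeeze (global average pool) -> two small FC layers -> sigmoid gate.
    Weights here are random for illustration; a trained model learns them."""
    c = feat.shape[0]
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    z = feat.mean(axis=(1, 2))                                   # squeeze: (C,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))    # gates in (0, 1)
    return feat * s[:, None, None]                               # reweight channels
```

The gate shrinks or preserves each channel without changing the feature map's shape, which is what lets such a module drop into an existing detection backbone.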
Affiliation(s)
- Ling Wang
- Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
- Jingjing Wan
- Department of Gastroenterology, The Second People's Hospital of Huai'an, The Affiliated Huai'an Hospital of Xuzhou Medical University, Huaian, 223002, China
- Xianchun Meng
- Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
- Bolun Chen
- Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
- Wei Shao
- Nanjing University of Aeronautics and Astronautics Shenzhen Research Institute, Shenzhen, 518038, China
4
Zhang P, Gao C, Huang Y, Chen X, Pan Z, Wang L, Dong D, Li S, Qi X. Artificial intelligence in liver imaging: methods and applications. Hepatol Int 2024; 18:422-434. [PMID: 38376649; DOI: 10.1007/s12072-023-10630-w]
Abstract
Liver disease is regarded as one of the major health threats to humans. Radiographic assessments hold promise for meeting the current demand for precise diagnosis and treatment of liver diseases, and artificial intelligence (AI), which excels at automatic quantitative assessment of complex medical image characteristics, has made great strides in complementing clinicians' qualitative interpretation of medical imaging. Here, we review the current state of medical-imaging-based AI methodologies and their applications in the management of liver diseases. We summarize the representative AI methodologies in liver imaging, with a focus on deep learning, and illustrate their promising clinical applications across the spectrum of precise liver disease detection, diagnosis, and treatment. We also address the current challenges and future perspectives of AI in liver imaging, with an emphasis on feature interpretability, multimodal data integration, and multicenter studies. Taken together, AI methodologies, combined with the large volume of available medical image data, may shape the future of liver disease care.
Affiliation(s)
- Peng Zhang
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Chaofei Gao
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Yifei Huang
- Department of Gastroenterology, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Xiangyi Chen
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Zhuoshi Pan
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Lan Wang
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Di Dong
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Shao Li
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Xiaolong Qi
- Center of Portal Hypertension, Department of Radiology, Zhongda Hospital, Medical School, Nurturing Center of Jiangsu Province for State Laboratory of AI Imaging & Interventional Radiology, Southeast University, Nanjing, China
5
Zuluaga L, Rich JM, Gupta R, Pedraza A, Ucpinar B, Okhawere KE, Saini I, Dwivedi P, Patel D, Zaytoun O, Menon M, Tewari A, Badani KK. AI-powered real-time annotations during urologic surgery: The future of training and quality metrics. Urol Oncol 2024; 42:57-66. [PMID: 38142209; DOI: 10.1016/j.urolonc.2023.11.002]
Abstract
INTRODUCTION AND OBJECTIVE Real-time artificial intelligence (AI) annotation of the surgical field has the potential to automatically extract information from surgical videos, helping to create a robust surgical atlas. This content can be used for surgical education and quality initiatives. We demonstrate the first use of AI in urologic robotic surgery to capture live surgical video and annotate key surgical steps and safety milestones in real time. SUMMARY BACKGROUND DATA While AI models can generate automated annotations from collections of video images, the real-time implementation of such technology in urological robotic surgery to aid surgeons and training staff has yet to be studied. METHODS We conducted an educational symposium that broadcast two live procedures, a robotic-assisted radical prostatectomy (RARP) and a robotic-assisted partial nephrectomy (RAPN). A surgical AI platform (Theator, Palo Alto, CA) generated real-time annotations and identified operative safety milestones. This was achieved through trained algorithms, conventional video recognition, and novel Video Transfer Network technology, which captures clips in full context, enabling automatic recognition and surgical mapping in real time. RESULTS Real-time AI annotations for procedure #1, RARP, are found in Table 1. The safety milestone annotations included the apical safety maneuver and deliberate views of structures such as the external iliac vessels and the obturator nerve. Real-time AI annotations for procedure #2, RAPN, are found in Table 1. Safety milestones included deliberate views of structures such as the gonadal vessels and the ureter. AI-annotated surgical events included intraoperative ultrasound, temporary clip application and removal, hemostatic powder application, and notable hemorrhage.
CONCLUSIONS For the first time, surgical intelligence successfully showcased real-time AI annotations of two separate urologic robotic procedures during a live telecast. These annotations may provide the technological framework for sending automatic notifications to clinical or operational stakeholders. This technology is a first step toward real-time intraoperative decision support, leveraging big data to improve the quality of surgical care, potentially improve surgical outcomes, and support training and education.
Affiliation(s)
- Laura Zuluaga
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Jordan Miller Rich
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Raghav Gupta
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Adriana Pedraza
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Burak Ucpinar
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Kennedy E Okhawere
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Indu Saini
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Priyanka Dwivedi
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Dhruti Patel
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Osama Zaytoun
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Mani Menon
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Ashutosh Tewari
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Ketan K Badani
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
6
Li A, Javidan AP, Namazi B, Madani A, Forbes TL. Development of an Artificial Intelligence Tool for Intraoperative Guidance During Endovascular Abdominal Aortic Aneurysm Repair. Ann Vasc Surg 2024; 99:96-104. [PMID: 37914075; DOI: 10.1016/j.avsg.2023.08.027]
Abstract
BACKGROUND Adverse events during surgery can occur in part due to errors in visual perception and judgment. Deep learning, a branch of artificial intelligence (AI), has shown promise in providing real-time intraoperative guidance. This study aims to train and test the performance of a deep learning model that can identify inappropriate landing zones during endovascular aneurysm repair (EVAR). METHODS A deep learning model was trained to identify a "No-Go" landing zone during EVAR, defined by coverage of the lowest renal artery by the stent graft. Fluoroscopic images from elective EVAR procedures performed at a single institution and from open-access sources were selected. Annotations of the "No-Go" zone were performed by trained annotators. A 10-fold cross-validation technique was used to evaluate the performance of the model against human annotations. Primary outcomes were intersection-over-union (IoU) and F1 score; secondary outcomes were pixel-wise accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). RESULTS The AI model was trained using 369 images procured from 110 different patients/videos, including 18 patients/videos (44 images) from open-access sources. For the primary outcomes, IoU and F1 were 0.43 (standard deviation ± 0.29) and 0.53 (±0.32), respectively. For the secondary outcomes, accuracy, sensitivity, specificity, NPV, and PPV were 0.97 (±0.002), 0.51 (±0.34), 0.99 (±0.001), 0.99 (±0.002), and 0.62 (±0.34), respectively. CONCLUSIONS AI can effectively identify suboptimal areas of stent deployment during EVAR. Further directions include validating the model on datasets from other institutions and assessing its ability to predict optimal stent graft placement and clinical outcomes.
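The primary outcomes above are standard overlap metrics for segmentation. A minimal sketch (not the authors' implementation) of computing pixel-wise IoU and F1 from two flattened binary masks:

```python
def iou_f1(pred, target):
    """Pixel-wise IoU and F1 (Dice) for two equal-length binary masks
    (iterables of 0/1), e.g. flattened segmentation outputs."""
    tp = sum(1 for p, t in zip(pred, target) if p and t)
    fp = sum(1 for p, t in zip(pred, target) if p and not t)
    fn = sum(1 for p, t in zip(pred, target) if t and not p)
    union = tp + fp + fn
    if union == 0:            # both masks empty: perfect agreement
        return 1.0, 1.0
    return tp / union, 2 * tp / (2 * tp + fp + fn)
```

Note that F1 here equals the Dice coefficient, and the two metrics are linked by F1 = 2·IoU / (1 + IoU), which is why papers often report both.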
Affiliation(s)
- Allen Li
- Faculty of Medicine & The Ottawa Hospital Research Institute, University of Ottawa, Ottawa, Ontario, Canada
- Arshia P Javidan
- Division of Vascular Surgery, University of Toronto, Toronto, Ontario, Canada
- Babak Namazi
- Department of Surgery, University of Texas Southwestern Medical Center, Dallas, TX
- Amin Madani
- Department of Surgery, University Health Network & University of Toronto, Toronto, Ontario, Canada; Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, Ontario, Canada
- Thomas L Forbes
- Department of Surgery, University Health Network & University of Toronto, Toronto, Ontario, Canada
7
Elshaarawy O, Alboraie M, El-Kassas M. Artificial Intelligence in endoscopy: A future poll. Arab J Gastroenterol 2024; 25:13-17. [PMID: 38220477; DOI: 10.1016/j.ajg.2023.11.008]
Abstract
Artificial intelligence (AI) has been a trending topic in recent years, with many medical applications developed. In gastrointestinal endoscopy, AI systems include computer-assisted detection (CADe) for detecting lesions such as bleeding and polyps, and computer-assisted diagnosis (CADx) for optical biopsy and lesion characterization. The technology behind these systems is based on a computer algorithm trained for a specific function, such as recognizing or characterizing target lesions like colonic polyps. Moreover, AI systems can offer technical assistance to improve endoscopic performance, such as scope insertion guidance. Currently, we believe these technologies still lack legal and regulatory validation, and a large sector of doctors and patients have concerns. However, there is no doubt that these technologies will bring significant improvements in the endoscopic management of patients as well as save money and time.
Affiliation(s)
- Omar Elshaarawy
- Hepatology and Gastroenterology Department, National Liver Institute, Menoufia University, Menoufia, Egypt; Gastroenterology Department, Royal Liverpool University Hospital, NHS, UK
- Mohamed Alboraie
- Department of Internal Medicine, Al-Azhar University, Cairo, Egypt
- Mohamed El-Kassas
- Endemic Medicine Department, Faculty of Medicine, Helwan University, Cairo, Egypt
8
Chen X, Duan R, Shen Y, Jiang H. Design and evaluation of an intelligent physical examination system in improving the satisfaction of patients with chronic disease. Heliyon 2024; 10:e23906. [PMID: 38192845; PMCID: PMC10772725; DOI: 10.1016/j.heliyon.2023.e23906]
Abstract
Background and Purpose Enhancing patient satisfaction remains crucial for healthcare quality. The use of artificial intelligence (AI) in the Internet of Health Things (IoHT) can streamline the medical examination process. Most Traditional Chinese Medicine (TCM) examinations are non-invasive and contribute significantly to patient satisfaction. Our aim was to establish an intelligent physical examination system that combines TCM and Western medicine and to conduct a preliminary investigation into its effectiveness in enhancing the satisfaction of patients with chronic diseases. Materials and methods Experts from clinical departments, the equipment department, and the software development department participated in group discussions to determine the design principles and organizational structure of the intelligent physical examination system, which integrates TCM and Western medicine. We compared the satisfaction of patients examined with the intelligent physical examination system against that of patients examined with the traditional medical examination system. Results An intelligent physical examination system combining TCM and Western medicine was developed. A total of 106 patients were enrolled (intelligent group vs control group) to evaluate satisfaction. There were no statistically significant differences between the intelligent group and the control group in age, gender, education, or income level. We identified significant differences in five aspects of satisfaction: 1) the physical examination environment; 2) the attitude and responsiveness of doctors; 3) the attitude and responsiveness of nurses; 4) the effectiveness of obtaining results; and 5) the information regarding physical examination and medical advice (p < 0.05). These differences remained statistically significant after adjusting for age, gender, education, and income level.
Conclusions The intelligent physical examination system effectively capitalized on the advantages of combining AI with integrated TCM and Western medicine, substantially optimizing the medical examination process. Compared with the traditional physical examination system, the intelligent system significantly enhanced patient satisfaction. Future improvements could integrate chronic-disease follow-up technology into the system.
Affiliation(s)
- Xin Chen
- Department of General Practice, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, China
- Department of Geriatrics, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, China
- Ruxin Duan
- Beijing CapitalBio Technology Co., Ltd, Beijing, China
- Yao Shen
- Department of General Practice, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, China
- Department of Geriatrics, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, China
- Hua Jiang
- Department of General Practice, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, China
- Department of Geriatrics, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, China
9
Hanada S, Hayashi Y, Subramani S, Thenuwara K. Pioneering the Integration of Artificial Intelligence in Medical Oral Board Examinations. Cureus 2024; 16:e52318. [PMID: 38357084; PMCID: PMC10866608; DOI: 10.7759/cureus.52318]
Abstract
We evaluated the use of ChatGPT-4, an advanced artificial intelligence (AI) language model, in medical oral examinations, specifically in anesthesiology. Having initially proven adept in written examinations, ChatGPT-4 was tested against oral board sample sessions of the American Board of Anesthesiology. Modifications were made to ensure responses were concise and conversationally natural, simulating real patient consultations or oral examinations. The results demonstrate ChatGPT-4's impressive adaptability and its potential as a training and assessment tool for oral board examinations in medical education, indicating new avenues for AI application in this field.
Affiliation(s)
- Satoshi Hanada
- Anesthesia, University of Iowa Hospitals and Clinics, Iowa City, USA
- Yuri Hayashi
- Anesthesia, University of Iowa Hospitals and Clinics, Iowa City, USA
- Department of Anesthesiology and Intensive Care Medicine, Osaka University Graduate School of Medicine, Suita, JPN
- Kokila Thenuwara
- Anesthesia, University of Iowa Hospitals and Clinics, Iowa City, USA
10
Lou S, Du F, Song W, Xia Y, Yue X, Yang D, Cui B, Liu Y, Han P. Artificial intelligence for colorectal neoplasia detection during colonoscopy: a systematic review and meta-analysis of randomized clinical trials. EClinicalMedicine 2023; 66:102341. [PMID: 38078195; PMCID: PMC10698672; DOI: 10.1016/j.eclinm.2023.102341]
Abstract
BACKGROUND The use of artificial intelligence (AI) for detecting colorectal neoplasia during colonoscopy holds the potential to enhance adenoma detection rates (ADRs) and reduce adenoma miss rates (AMRs). However, varied outcomes have been observed across studies. This study therefore aimed to evaluate the potential advantages and disadvantages of employing AI-aided systems during colonoscopy. METHODS Using Medical Subject Headings (MeSH) terms and keywords, a comprehensive electronic literature search was performed of the Embase, Medline, and Cochrane Library databases from the inception of each database until October 04, 2023, to identify randomized controlled trials (RCTs) comparing AI-assisted with standard colonoscopy for detecting colorectal neoplasia. Primary outcomes included AMR, ADR, and adenomas detected per colonoscopy (APC). Secondary outcomes comprised the polyp miss rate (PMR), polyp detection rate (PDR), and polyps detected per colonoscopy (PPC). We utilized random-effects meta-analyses with Hartung-Knapp adjustment to consolidate results. The prediction interval (PI) and I2 statistics were used to quantify between-study heterogeneity, and meta-regression and subgroup analyses were performed to investigate its potential sources. This systematic review and meta-analysis is registered with PROSPERO (CRD42023428658). FINDINGS This study encompassed 33 trials involving 27,404 patients. Those undergoing AI-aided colonoscopy experienced a significant decrease in PMR (RR, 0.475; 95% CI, 0.294-0.768; I2 = 87.49%) and AMR (RR, 0.495; 95% CI, 0.390-0.627; I2 = 48.76%). Additionally, a significant increase was observed in PDR (RR, 1.238; 95% CI, 1.158-1.323; I2 = 81.67%) and ADR (RR, 1.242; 95% CI, 1.159-1.332; I2 = 78.87%), along with significant increases in PPC (IRR, 1.388; 95% CI, 1.270-1.517; I2 = 91.99%) and APC (IRR, 1.390; 95% CI, 1.277-1.513; I2 = 86.24%).
This corresponded to 0.271 more PPCs (95% CI, 0.144-0.259; I2 = 65.61%) and 0.202 more APCs (95% CI, 0.144-0.259; I2 = 68.15%). INTERPRETATION AI-aided colonoscopy significantly enhanced the detection of colorectal neoplasia, likely by reducing the miss rate. However, future studies should focus on evaluating the cost-effectiveness and long-term benefits of AI-aided colonoscopy in reducing cancer incidence. FUNDING This work was supported by the Heilongjiang Provincial Natural Science Foundation of China (LH2023H096), the Postdoctoral Research Project in Heilongjiang Province (LBH-Z22210), the National Natural Science Foundation of China General Program (82072640), and the Outstanding Youth Project of the Heilongjiang Natural Science Foundation (YQ2021H023).
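For intuition about the pooled risk ratios reported above: trial-level risk ratios are conventionally pooled on the log scale with inverse-variance weights. The sketch below is a simplified fixed-effect version (the review itself used random-effects models with the Hartung-Knapp adjustment, which additionally model between-study variance); the study tuples in the usage line are hypothetical:

```python
import math

def pool_risk_ratio(studies, z=1.96):
    """Fixed-effect inverse-variance pooling of risk ratios on the log scale.
    studies: iterable of (events_trt, n_trt, events_ctl, n_ctl)."""
    num = den = 0.0
    for a, n1, c, n2 in studies:
        log_rr = math.log((a / n1) / (c / n2))
        var = 1 / a - 1 / n1 + 1 / c - 1 / n2   # large-sample variance of log RR
        w = 1.0 / var
        num += w * log_rr
        den += w
    pooled, se = num / den, math.sqrt(1.0 / den)
    ci = (math.exp(pooled - z * se), math.exp(pooled + z * se))
    return math.exp(pooled), ci
```

Usage with made-up counts: `pool_risk_ratio([(60, 200, 40, 200), (45, 150, 30, 150)])` returns the pooled RR and its 95% CI.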
Affiliation(s)
- Shenghan Lou
- Department of Colorectal Surgery, Harbin Medical University Cancer Hospital, No.150 Haping Road, Harbin, Heilongjiang, 150081, China
- Fenqi Du
- Department of Colorectal Surgery, Harbin Medical University Cancer Hospital, No.150 Haping Road, Harbin, Heilongjiang, 150081, China
- Wenjie Song
- Department of Colorectal Surgery, Harbin Medical University Cancer Hospital, No.150 Haping Road, Harbin, Heilongjiang, 150081, China
- Yixiu Xia
- Department of Colorectal Surgery, Harbin Medical University Cancer Hospital, No.150 Haping Road, Harbin, Heilongjiang, 150081, China
- Xinyu Yue
- Department of Colorectal Surgery, Harbin Medical University Cancer Hospital, No.150 Haping Road, Harbin, Heilongjiang, 150081, China
- Da Yang
- Department of Colorectal Surgery, Harbin Medical University Cancer Hospital, No.150 Haping Road, Harbin, Heilongjiang, 150081, China
- Binbin Cui
- Department of Colorectal Surgery, Harbin Medical University Cancer Hospital, No.150 Haping Road, Harbin, Heilongjiang, 150081, China
- Yanlong Liu
- Department of Colorectal Surgery, Harbin Medical University Cancer Hospital, No.150 Haping Road, Harbin, Heilongjiang, 150081, China
- Peng Han
- Department of Colorectal Surgery, Harbin Medical University Cancer Hospital, No.150 Haping Road, Harbin, Heilongjiang, 150081, China
- Key Laboratory of Tumor Immunology in Heilongjiang, No.150 Haping Road, Harbin, Heilongjiang, 150081, China
Collapse
11
Igaki T, Kitaguchi D, Matsuzaki H, Nakajima K, Kojima S, Hasegawa H, Takeshita N, Kinugasa Y, Ito M. Automatic Surgical Skill Assessment System Based on Concordance of Standardized Surgical Field Development Using Artificial Intelligence. JAMA Surg 2023; 158:e231131. [PMID: 37285142 PMCID: PMC10248810 DOI: 10.1001/jamasurg.2023.1131] [Citation(s) in RCA: 25] [Impact Index Per Article: 12.5] [Received: 09/15/2022] [Accepted: 01/28/2023] [Indexed: 06/08/2023]
Abstract
Importance Automatic surgical skill assessment with artificial intelligence (AI) is more objective than manual video review-based skill assessment and can reduce human burden. Standardization of surgical field development is an important aspect of this skill assessment. Objective To develop a deep learning model that can recognize the standardized surgical fields in laparoscopic sigmoid colon resection and to evaluate the feasibility of automatic surgical skill assessment based on the concordance of the standardized surgical field development using the proposed deep learning model. Design, Setting, and Participants This retrospective diagnostic study used intraoperative videos of laparoscopic colorectal surgery submitted to the Japan Society for Endoscopic Surgery between August 2016 and November 2017. Data were analyzed from April 2020 to September 2022. Interventions Videos of surgery performed by expert surgeons with Endoscopic Surgical Skill Qualification System (ESSQS) scores higher than 75 were used to construct a deep learning model able to recognize a standardized surgical field and output its similarity to standardized surgical field development as an AI confidence score (AICS). Other videos were extracted as the validation set. Main Outcomes and Measures Videos with ESSQS scores more than 2 SDs below or above the mean were defined as the low- and high-score groups, respectively. The correlation between AICS and ESSQS score and the screening performance using AICS for the low- and high-score groups were analyzed. Results The sample included 650 intraoperative videos, 60 of which were used for model construction and 60 for validation. The Spearman rank correlation coefficient between the AICS and ESSQS score was 0.81. The receiver operating characteristic (ROC) curves for the screening of the low- and high-score groups were plotted, and the areas under the ROC curve for the low- and high-score group screening were 0.93 and 0.94, respectively.
Conclusions and Relevance The AICS from the developed model strongly correlated with the ESSQS score, demonstrating the model's feasibility for use as a method of automatic surgical skill assessment. The findings also suggest the feasibility of the proposed model for creating an automated screening system for surgical skills and its potential application to other types of endoscopic procedures.
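The headline statistic in this record, a Spearman rank correlation of 0.81 between AICS and ESSQS scores, is computed by ranking both variables and taking the Pearson correlation of the ranks. A minimal pure-Python sketch; the function names and score pairs below are illustrative inventions, not the study's data:

```python
def ranks(xs):
    """Assign 1-based ranks, giving tied values their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: the Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Illustrative only: ten made-up (AICS, ESSQS) pairs, not study data.
aics = [0.2, 0.5, 0.4, 0.9, 0.7, 0.3, 0.8, 0.6, 0.1, 0.95]
essqs = [55, 70, 68, 92, 80, 60, 88, 75, 50, 95]
```

For these monotonically related toy scores, `spearman(aics, essqs)` is exactly 1.0; the average-rank convention for ties matches the standard definition.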
Affiliation(s)
- Takahiro Igaki: Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan; Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan; Department of Gastrointestinal Surgery, Tokyo Medical and Dental University Graduate School of Medicine, Yushima, Bunkyo-Ku, Tokyo, Japan
- Daichi Kitaguchi: Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan; Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Hiroki Matsuzaki: Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Kei Nakajima: Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan; Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Shigehiro Kojima: Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan; Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Hiro Hasegawa: Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan; Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Nobuyoshi Takeshita: Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan; Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Yusuke Kinugasa: Department of Gastrointestinal Surgery, Tokyo Medical and Dental University Graduate School of Medicine, Yushima, Bunkyo-Ku, Tokyo, Japan
- Masaaki Ito: Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan; Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
12
Sharma A, Kumar R, Yadav G, Garg P. Artificial intelligence in intestinal polyp and colorectal cancer prediction. Cancer Lett 2023; 565:216238. [PMID: 37211068 DOI: 10.1016/j.canlet.2023.216238] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Received: 04/06/2023] [Revised: 05/17/2023] [Accepted: 05/17/2023] [Indexed: 05/23/2023]
Abstract
Artificial intelligence (AI) algorithms and their application to disease detection and decision support for healthcare professionals have greatly evolved in the recent decade. AI has been widely applied and explored in gastroenterology for endoscopic analysis to diagnose intestinal cancers, premalignant polyps, gastrointestinal inflammatory lesions, and bleeding. Patients' responses to treatments and prognoses have both been predicted using AI by combining multiple algorithms. In this review, we explored the recent applications of AI algorithms in the identification and characterization of intestinal polyps and in colorectal cancer prediction. AI-based prediction models have the potential to help medical practitioners diagnose, establish prognoses, and reach accurate conclusions for the treatment of patients. With the understanding that health authorities will require rigorous validation of AI approaches in randomized controlled studies before widespread clinical use, the article also discusses the limitations and challenges associated with deploying AI systems to diagnose intestinal malignancies and premalignant lesions.
Affiliation(s)
- Anju Sharma: Department of Pharmacoinformatics, National Institute of Pharmaceutical Education and Research, S.A.S Nagar, 160062, Punjab, India
- Rajnish Kumar: Amity Institute of Biotechnology, Amity University Uttar Pradesh, Lucknow Campus, Uttar Pradesh, 226010, India; Department of Veterinary Medicine and Surgery, College of Veterinary Medicine, University of Missouri, Columbia, MO, USA
- Garima Yadav: Amity Institute of Biotechnology, Amity University Uttar Pradesh, Lucknow Campus, Uttar Pradesh, 226010, India
- Prabha Garg: Department of Pharmacoinformatics, National Institute of Pharmaceutical Education and Research, S.A.S Nagar, 160062, Punjab, India
13
Wang P, Liu XG, Kang M, Peng X, Shu ML, Zhou GY, Liu PX, Xiong F, Deng MM, Xia HF, Li JJ, Long XQ, Song Y, Li LP. Artificial intelligence empowers the second-observer strategy for colonoscopy: a randomized clinical trial. Gastroenterol Rep (Oxf) 2023; 11:goac081. [PMID: 36686571 PMCID: PMC9850273 DOI: 10.1093/gastro/goac081] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Received: 05/13/2022] [Revised: 11/15/2022] [Accepted: 11/17/2022] [Indexed: 01/21/2023] Open
Abstract
Background In colonoscopy screening for colorectal cancer, the limitations of human vision may lead to a higher miss rate for lesions; artificial intelligence (AI) assistance has been demonstrated to improve polyp detection. However, direct evidence is still lacking on whether AI is superior to trainees or experienced nurses as a second observer for increasing adenoma detection during colonoscopy. In this study, we aimed to compare the effectiveness of AI and human observer assistance during colonoscopy. Methods A prospective multicenter randomized study was conducted from 2 September 2019 to 29 May 2020 at four endoscopy centers in China. Eligible patients were randomized to either the computer-aided detection (CADe)-assisted group or the observer-assisted group. The primary outcome was adenomas per colonoscopy (APC). Secondary outcomes included polyps per colonoscopy (PPC), adenoma detection rate (ADR), and polyp detection rate (PDR). Continuous and categorical variables were compared using RStudio (version 3.4.4). Results A total of 1,261 eligible patients (636 in the CADe-assisted group and 625 in the observer-assisted group) were analysed. APC (0.42 vs 0.35, P = 0.034), PPC (1.13 vs 0.81, P < 0.001), PDR (47.5% vs 37.4%, P < 0.001), the number of detected sessile polyps (683 vs 464, P < 0.001), and sessile adenomas (244 vs 182, P = 0.005) were significantly higher in the CADe-assisted group than in the observer-assisted group, whereas ADR did not differ significantly (25.8% vs 24.0%, P = 0.464). False detections by the CADe system were fewer than those by the human observer (122 vs 191, P < 0.001). Conclusions Compared with a human observer, the CADe system may improve the clinical outcome of colonoscopy and reduce disturbance to routine practice (Chictr.org.cn No.: ChiCTR1900025235).
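A note on the endpoints in this trial: APC is a mean count per procedure, while ADR is the fraction of procedures with at least one adenoma, so extra adenomas found in already-positive patients raise APC without moving ADR, which is consistent with a significant APC difference alongside a non-significant ADR difference. A toy sketch with invented per-procedure counts (not trial data):

```python
def apc(adenoma_counts):
    """Adenomas per colonoscopy: mean adenoma count across procedures."""
    return sum(adenoma_counts) / len(adenoma_counts)

def adr(adenoma_counts):
    """Adenoma detection rate: fraction of procedures with >= 1 adenoma."""
    return sum(1 for c in adenoma_counts if c >= 1) / len(adenoma_counts)

# Invented counts for two hypothetical arms. The first arm finds extra
# adenomas, but only in already-positive patients, so APC doubles while
# ADR is unchanged.
cade_arm = [0, 0, 3, 2, 0, 1, 0, 0, 2, 0]
observer_arm = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0]
```

Here both arms have an ADR of 0.4, but APC is 0.8 in the first arm versus 0.4 in the second.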
Affiliation(s)
- Min Kang: Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, P. R. China
- Xue Peng: Department of Gastroenterology, Xinqiao Hospital, Third Military Medical University, Chongqing, P. R. China
- Mei-Ling Shu: Department of Gastroenterology, Suining Central Hospital, Suining, Sichuan, P. R. China
- Guan-Yu Zhou: Department of Gastroenterology, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, Sichuan, P. R. China
- Pei-Xi Liu: Department of Gastroenterology, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, Sichuan, P. R. China
- Fei Xiong: Department of Gastroenterology, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, Sichuan, P. R. China
- Ming-Ming Deng: Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, P. R. China
- Hong-Fen Xia: Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, P. R. China
- Jian-Jun Li: Department of Gastroenterology, Xinqiao Hospital, Third Military Medical University, Chongqing, P. R. China
- Xiao-Qi Long: Department of Gastroenterology, Suining Central Hospital, Suining, Sichuan, P. R. China
- Yan Song: Department of Gastroenterology, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, Sichuan, P. R. China
- Liang-Ping Li (corresponding author): Department of Gastroenterology, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, No.32 West Second Section, First Ring Road, Chengdu, Sichuan 610072, China. Tel: +86-28-8739 3927
14
Automated Three-Dimensional Liver Reconstruction with Artificial Intelligence for Virtual Hepatectomy. J Gastrointest Surg 2022; 26:2119-2127. [PMID: 35941495 DOI: 10.1007/s11605-022-05415-9] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Received: 05/04/2022] [Accepted: 07/14/2022] [Indexed: 01/31/2023]
Abstract
OBJECTIVE To validate the newly developed artificial intelligence (AI)-assisted simulation by evaluating the speed of three-dimensional (3D) reconstruction and the accuracy of segmental volumetry among patients with liver tumors. BACKGROUND AI with a deep learning algorithm based on healthy liver computed tomography images has been developed to assist three-dimensional liver reconstruction in virtual hepatectomy. METHODS 3D reconstruction using hepatic computed tomography scans of 144 patients with liver tumors was performed using two different versions of Synapse 3D (Fujifilm, Tokyo, Japan): the manual method based on the tracking algorithm and the AI-assisted method. Processing time to 3D reconstruction and volumetry of the whole liver and of tumor-containing and tumor-free segments were compared. RESULTS The median total liver volume and the volume ratios of a tumor-containing and a tumor-free segment were calculated as 1035 mL, 9.4%, and 9.8% by the AI-assisted reconstruction, and as 1120 mL, 9.9%, and 9.3% by the manual reconstruction method. The mean absolute deviations were 16.7 mL and 1.0% in the tumor-containing segment and 15.5 mL and 1.0% in the tumor-free segment. The processing time was shorter with the AI-assisted method (2.1 vs. 35.0 min; p < 0.001). CONCLUSIONS Virtual hepatectomy, including functional liver volumetric analysis, using the 3D liver models reconstructed by the AI-assisted method was reliable for the practical planning of liver tumor resections.
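The agreement figures reported here are mean absolute deviations: the average of the absolute per-case differences between AI-assisted and manual volumetry. A minimal sketch; the volumes below are invented illustrative values in mL, not the study's data:

```python
def mean_absolute_deviation(ai_values, manual_values):
    """Average absolute per-case difference between two measurement methods."""
    assert len(ai_values) == len(manual_values)
    diffs = (abs(a - m) for a, m in zip(ai_values, manual_values))
    return sum(diffs) / len(ai_values)

# Invented segment volumes in mL (AI-assisted vs. manual), not study data.
ai_seg = [102.0, 95.5, 118.0, 87.0]
manual_seg = [110.0, 90.0, 120.0, 80.0]
```

For these toy values, `mean_absolute_deviation(ai_seg, manual_seg)` is 5.625 mL: the per-case differences 8, 5.5, 2, and 7 average to 22.5 / 4.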
15
Madani A, Namazi B, Altieri MS, Hashimoto DA, Rivera AM, Pucher PH, Navarrete-Welton A, Sankaranarayanan G, Brunt LM, Okrainec A, Alseidi A. Artificial Intelligence for Intraoperative Guidance: Using Semantic Segmentation to Identify Surgical Anatomy During Laparoscopic Cholecystectomy. Ann Surg 2022; 276:363-369. [PMID: 33196488 PMCID: PMC8186165 DOI: 10.1097/sla.0000000000004594] [Citation(s) in RCA: 153] [Impact Index Per Article: 51.0] [Indexed: 12/11/2022]
Abstract
OBJECTIVE The aim of this study was to develop and evaluate the performance of artificial intelligence (AI) models that can identify safe and dangerous zones of dissection, and anatomical landmarks, during laparoscopic cholecystectomy (LC). SUMMARY BACKGROUND DATA Many adverse events during surgery occur due to errors in visual perception and judgment leading to misinterpretation of anatomy. Deep learning, a subfield of AI, can potentially be used to provide real-time guidance intraoperatively. METHODS Deep learning models were developed and trained to identify safe (Go) and dangerous (No-Go) zones of dissection, as well as the liver, gallbladder, and hepatocystic triangle, during LC. Annotations were performed by 4 high-volume surgeons. AI predictions were evaluated using 10-fold cross-validation against annotations by expert surgeons. Primary outcomes were intersection-over-union (IOU) and F1 score (validated spatial correlation indices), and secondary outcomes were pixel-wise accuracy, sensitivity, and specificity, each reported ± standard deviation. RESULTS AI models were trained on 2627 random frames from 290 LC videos, procured from 37 countries, 136 institutions, and 153 surgeons. Mean IOU, F1 score, accuracy, sensitivity, and specificity for the AI to identify Go zones were 0.53 (±0.24), 0.70 (±0.28), 0.94 (±0.05), 0.69 (±0.20), and 0.94 (±0.03), respectively. For No-Go zones, these metrics were 0.71 (±0.29), 0.83 (±0.31), 0.95 (±0.06), 0.80 (±0.21), and 0.98 (±0.05), respectively. Mean IOU for identification of the liver, gallbladder, and hepatocystic triangle were: 0.86 (±0.12), 0.72 (±0.19), and 0.65 (±0.22), respectively. CONCLUSIONS AI can be used to identify anatomy within the surgical field. This technology may eventually be used to provide real-time guidance and minimize the risk of adverse events.
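The primary outcomes, IOU and F1, compare a predicted mask A against an expert annotation B through set overlap: IOU = |A∩B| / |A∪B| and F1 = 2|A∩B| / (|A| + |B|); on binary masks the F1 score coincides with the Dice coefficient. A minimal sketch on flat 0/1 masks; the example masks are invented:

```python
def iou_f1(pred, truth):
    """IOU and F1 for binary masks given as flat 0/1 lists of equal length."""
    inter = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    pred_area, truth_area = sum(pred), sum(truth)
    union = pred_area + truth_area - inter
    iou = inter / union if union else 1.0  # two empty masks agree perfectly
    f1 = 2 * inter / (pred_area + truth_area) if pred_area + truth_area else 1.0
    return iou, f1

# Invented 6-pixel masks: the prediction overlaps the annotation on 2 pixels.
pred = [1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 0, 0]
```

For these toy masks the intersection is 2 pixels and the union is 4, so IOU = 0.5 and F1 = 2/3; F1 is always at least as large as IOU, which is why the paper's F1 values exceed the corresponding IOU values.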
Affiliation(s)
- Amin Madani: Department of Surgery, University Health Network, Toronto, ON, Canada
- Babak Namazi: Center for Evidence-Based Simulation, Baylor University Medical Center, Dallas, TX, USA
- Maria S. Altieri: Department of Surgery, East Carolina University Brody School of Medicine, Greenville, NC, USA
- Daniel A. Hashimoto: Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
- Allison Navarrete-Welton: Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
- L. Michael Brunt: Department of Surgery, Washington University School of Medicine, St. Louis, MO, USA
- Allan Okrainec: Department of Surgery, University Health Network, Toronto, ON, Canada
- Adnan Alseidi: Department of Surgery, University of California – San Francisco, San Francisco, CA, USA
16
Yang CB, Kim SH, Lim YJ. Preparation of image databases for artificial intelligence algorithm development in gastrointestinal endoscopy. Clin Endosc 2022; 55:594-604. [PMID: 35636749 PMCID: PMC9539300 DOI: 10.5946/ce.2021.229] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Received: 09/10/2021] [Accepted: 03/07/2022] [Indexed: 12/09/2022] Open
Abstract
Over the past decade, technological advances in deep learning have led to the introduction of artificial intelligence (AI) in medical imaging. The most commonly used structure in image recognition is the convolutional neural network, which mimics the action of the human visual cortex. The applications of AI in gastrointestinal endoscopy are diverse. Computer-aided diagnosis has achieved remarkable outcomes with recent improvements in machine-learning techniques and advances in computer performance. Despite some hurdles, the implementation of AI-assisted clinical practice is expected to aid endoscopists in real-time decision-making. In this summary, we reviewed state-of-the-art AI in the field of gastrointestinal endoscopy and offered a practical guide for building a learning image dataset for algorithm development.
Affiliation(s)
- Chang Bong Yang: Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang, Korea
- Sang Hoon Kim: Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang, Korea
- Yun Jeong Lim: Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang, Korea
17
Real-time detection of the recurrent laryngeal nerve in thoracoscopic esophagectomy using artificial intelligence. Surg Endosc 2022; 36:5531-5539. [PMID: 35476155 DOI: 10.1007/s00464-022-09268-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/17/2021] [Accepted: 04/09/2022] [Indexed: 10/18/2022]
Abstract
BACKGROUND Artificial intelligence (AI) has been largely investigated in the field of surgery, particularly in quality assurance. However, AI-guided navigation during surgery has not yet been put into practice because a sufficient level of performance has not been reached. We aimed to develop deep learning-based AI image processing software to identify the location of the recurrent laryngeal nerve during thoracoscopic esophagectomy and determine whether the incidence of recurrent laryngeal nerve paralysis is reduced using this software. METHODS More than 3000 images extracted from 20 thoracoscopic esophagectomy videos and 40 images extracted from 8 thoracoscopic esophagectomy videos were annotated for identification of the recurrent laryngeal nerve. The Dice coefficient was used to assess the detection performance of the model and that of surgeons (specialized esophageal surgeons and certified general gastrointestinal surgeons). The performance was compared using a test set. RESULTS The average Dice coefficient of the AI model was 0.58. This was not significantly different from the Dice coefficient of the group of specialized esophageal surgeons (P = 0.26); however, it was significantly higher than that of the group of certified general gastrointestinal surgeons (P = 0.019). CONCLUSIONS Our software's performance in identification of the recurrent laryngeal nerve was superior to that of general surgeons and almost reached that of specialized surgeons. Our software provides real-time identification and will be useful for thoracoscopic esophagectomy after further developments.
18
Madani A, Feldman LS. Artificial Intelligence for Augmenting Perioperative Surgical Decision-Making-Are We There Yet? JAMA Surg 2021; 156:941. [PMID: 34232282 DOI: 10.1001/jamasurg.2021.3050] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 01/28/2023]
Affiliation(s)
- Amin Madani: Department of Surgery, University Health Network, Toronto General Hospital, Toronto, Ontario, Canada
- Liane S Feldman: Department of Surgery, McGill University, Montreal, Quebec, Canada
19
Barua I, Vinsard DG, Jodal HC, Løberg M, Kalager M, Holme Ø, Misawa M, Bretthauer M, Mori Y. Artificial intelligence for polyp detection during colonoscopy: a systematic review and meta-analysis. Endoscopy 2021; 53:277-284. [PMID: 32557490 DOI: 10.1055/a-1201-7165] [Citation(s) in RCA: 148] [Impact Index Per Article: 37.0] [Indexed: 12/12/2022]
Abstract
BACKGROUND Artificial intelligence (AI)-based polyp detection systems are used during colonoscopy with the aim of increasing lesion detection and improving colonoscopy quality. PATIENTS AND METHODS We performed a systematic review and meta-analysis of prospective trials to determine the value of AI-based polyp detection systems for detection of polyps and colorectal cancer. We performed systematic searches in MEDLINE, EMBASE, and Cochrane CENTRAL. Independent reviewers screened studies and assessed eligibility, certainty of evidence, and risk of bias. We compared colonoscopy with and without AI by calculating relative and absolute risks and mean differences for detection of polyps, adenomas, and colorectal cancer. RESULTS Five randomized trials were eligible for analysis. Colonoscopy with AI increased adenoma detection rates (ADRs) and polyp detection rates (PDRs) compared to colonoscopy without AI (values given with 95%CI). ADR with AI was 29.6% (22.2%-37.0%) versus 19.3% (12.7%-25.9%) without AI; relative risk (RR) 1.52 (1.31-1.77), with high certainty. PDR was 45.4% (41.1%-49.8%) with AI versus 30.6% (26.5%-34.6%) without AI; RR 1.48 (1.37-1.60), with high certainty. There was no difference in detection of advanced adenomas (mean advanced adenomas per colonoscopy 0.03 for each group, high certainty). The mean number of adenomas detected per colonoscopy was higher for small adenomas (≤5 mm) with AI than without (mean difference 0.15 [0.12-0.18]), but not for larger adenomas (>5-≤10 mm, mean difference 0.03 [0.01-0.05]; >10 mm, mean difference 0.01 [0.00-0.02]; high certainty). Data on cancer are unavailable. CONCLUSIONS AI-based polyp detection systems during colonoscopy increase detection of small nonadvanced adenomas and polyps, but not of advanced adenomas.
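The pooled effects in this meta-analysis are relative risks. For a single two-arm trial, the RR and its 95% CI are computed on the log scale: CI = exp(ln RR ± 1.96 * SE), with SE = sqrt(1/a - 1/n1 + 1/b - 1/n2), where a/n1 and b/n2 are the event counts and totals in the two arms. A sketch with invented counts, not data from any of the five trials:

```python
import math

def relative_risk(a, n1, b, n2):
    """Relative risk of an event in arm 1 (a events / n1 patients) versus
    arm 2 (b events / n2 patients), with a 95% CI from the log-scale SE."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Invented counts: 150/500 procedures with >= 1 adenoma under AI
# versus 100/500 without, giving RR = 1.5.
rr, lo, hi = relative_risk(150, 500, 100, 500)
```

Here `rr` is 1.5 with a CI of roughly 1.20 to 1.87; because the lower bound exceeds 1, this toy difference would be significant at the 5% level.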
Affiliation(s)
- Ishita Barua: Clinical Effectiveness Research Group, Institute of Health and Society, University of Oslo, and Department of Transplantation Medicine, Oslo University Hospital, Oslo, Norway
- Daniela Guerrero Vinsard: Department of Internal Medicine, University of Connecticut Health Centre, Connecticut, USA; Department of Gastroenterology and Hepatology, Mayo Clinic, Rochester, Minnesota, USA
- Henriette C Jodal: Clinical Effectiveness Research Group, Institute of Health and Society, University of Oslo, and Department of Transplantation Medicine, Oslo University Hospital, Oslo, Norway
- Magnus Løberg: Clinical Effectiveness Research Group, Institute of Health and Society, University of Oslo, and Department of Transplantation Medicine, Oslo University Hospital, Oslo, Norway
- Mette Kalager: Clinical Effectiveness Research Group, Institute of Health and Society, University of Oslo, and Department of Transplantation Medicine, Oslo University Hospital, Oslo, Norway
- Øyvind Holme: Clinical Effectiveness Research Group, Institute of Health and Society, University of Oslo, and Department of Transplantation Medicine, Oslo University Hospital, Oslo, Norway
- Masashi Misawa: Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
- Michael Bretthauer: Clinical Effectiveness Research Group, Institute of Health and Society, University of Oslo, and Department of Transplantation Medicine, Oslo University Hospital, Oslo, Norway
- Yuichi Mori: Clinical Effectiveness Research Group, Institute of Health and Society, University of Oslo, and Department of Transplantation Medicine, Oslo University Hospital, Oslo, Norway; Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
20
Jheng YC, Wang YP, Lin HE, Sung KY, Chu YC, Wang HS, Jiang JK, Hou MC, Lee FY, Lu CL. A novel machine learning-based algorithm to identify and classify lesions and anatomical landmarks in colonoscopy images. Surg Endosc 2021; 36:640-650. [PMID: 33591447 DOI: 10.1007/s00464-021-08331-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Received: 05/07/2020] [Accepted: 01/13/2021] [Indexed: 02/06/2023]
Abstract
OBJECTIVES Computer-aided diagnosis (CAD)-based artificial intelligence (AI) has been shown to be highly accurate for detecting and characterizing colon polyps. However, the application of AI to identify normal colon landmarks and differentiate multiple colon diseases has not yet been established. We aimed to develop a convolutional neural network (CNN)-based algorithm (GUTAID) to recognize different colon lesions and anatomical landmarks. METHODS Colonoscopic images were obtained to train and validate the AI classifiers. An independent dataset was collected for verification. The architecture of GUTAID contains two major sub-models: the Normal, Polyp, Diverticulum, Cecum and CAncer (NPDCCA) and Narrow-Band Imaging for Adenomatous/Hyperplastic polyps (NBI-AH) models. The development of GUTAID was based on the 16-layer Visual Geometry Group (VGG16) architecture and implemented on Google Cloud Platform. RESULTS In total, 7838 colonoscopy images were used for developing and validating the AI model. An additional 1273 images were independently applied to verify the GUTAID. The accuracy for GUTAID in detecting various colon lesions/landmarks is 93.3% for polyps, 93.9% for diverticula, 91.7% for cecum, 97.5% for cancer, and 83.5% for adenomatous/hyperplastic polyps. CONCLUSIONS A CNN-based algorithm (GUTAID) to identify colonic abnormalities and landmarks was successfully established with high accuracy. This GUTAID system can further characterize polyps for optical diagnosis. We demonstrated that AI classification methodology is feasible to identify multiple and different colon diseases.
Affiliation(s)
- Ying-Chun Jheng: Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei, Taiwan; Division of Gastroenterology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Yen-Po Wang: Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei, Taiwan; Division of Gastroenterology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Institute of Brain Science, National Yang-Ming University School of Medicine, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Hung-En Lin: Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei, Taiwan; Division of Gastroenterology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Kuang-Yi Sung: Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei, Taiwan; Division of Gastroenterology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Yuan-Chia Chu: Information Management Office, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Huann-Sheng Wang: Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei, Taiwan; Division of Colon and Rectum Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Jeng-Kai Jiang: Division of Colon and Rectum Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Ming-Chih Hou: Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei, Taiwan; Division of Gastroenterology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Fa-Yauh Lee: Division of Gastroenterology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Ching-Liang Lu: Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei, Taiwan; Division of Gastroenterology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Institute of Brain Science, National Yang-Ming University School of Medicine, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
21
Lei S, Wang Z, Tu M, Liu P, Lei L, Xiao X, Zhou G, Liu X, Li L, Wang P. Adenoma detection rate is not influenced by the time of day in computer-aided detection colonoscopy. Medicine (Baltimore) 2020; 99:e23685. [PMID: 33371110 PMCID: PMC7748207 DOI: 10.1097/md.0000000000023685] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Received: 06/28/2020] [Accepted: 11/13/2020] [Indexed: 12/29/2022] Open
Abstract
Because of endoscopist fatigue, the time of day of colonoscopy has been shown to influence the adenoma detection rate (ADR). Computer-aided detection (CADe) provides simultaneous visual alerts for polyps during colonoscopy and can thereby increase the ADR, both by reinforcing the endoscopist's diagnostic performance and by alleviating fatigue. The aim of the study was to investigate whether CADe colonoscopy could eliminate the influence of afternoon fatigue on ADR. We retrospectively analyzed the recorded data of patients who underwent CADe colonoscopy from September 2017 to February 2019 in the Endoscopy Center of Sichuan Provincial People's Hospital. Patients' demographic data as well as baseline data recorded during colonoscopy were used for the analysis. Morning colonoscopy was defined as colonoscopic procedures starting between 8:00 am and 12:00 noon; afternoon colonoscopy was defined as procedures starting at 2:00 pm or later. The primary outcome was ADR. Univariate analysis and multivariate regression analysis were also performed. A total of 484 CADe colonoscopies were performed by 4 endoscopists in the study. The overall polyp detection rate was 52% and the overall ADR was 35.5%. The mean number of adenomas detected per colonoscopy (0.62 vs 0.61, P > .05) and ADR (0.36 vs 0.35, P > .05) were similar in the am and pm groups. Multivariable analysis showed that the ADR of CADe colonoscopy was influenced by age (P < .001), gender (P = .004), and withdrawal time (P < .001); no correlation was found with bowel preparation (P = .993) or endoscopist experience (P = .804). CADe colonoscopy could eliminate the influence of afternoon fatigue on ADR. The ADR during CADe colonoscopy is significantly affected by age, gender, and withdrawal time.
Affiliation(s)
- Mengtian Tu - Department of Internal Medicine, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, China
- Lei Lei - Department of Gastroenterology
- Pu Wang - Department of Gastroenterology
22
Pannala R, Krishnan K, Melson J, Parsi MA, Schulman AR, Sullivan S, Trikudanathan G, Trindade AJ, Watson RR, Maple JT, Lichtenstein DR. Artificial intelligence in gastrointestinal endoscopy. VideoGIE 2020; 5:598-613. [PMID: 33319126 PMCID: PMC7732722 DOI: 10.1016/j.vgie.2020.08.013] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
BACKGROUND AND AIMS Artificial intelligence (AI)-based applications have transformed several industries and are widely used in various consumer products and services. In medicine, AI is primarily being used for image classification and natural language processing and has great potential to affect image-based specialties such as radiology, pathology, and gastroenterology (GE). This document reviews the reported applications of AI in GE, focusing on endoscopic image analysis. METHODS The MEDLINE database was searched through May 2020 for relevant articles by using key words such as machine learning, deep learning, artificial intelligence, computer-aided diagnosis, convolutional neural networks, GI endoscopy, and endoscopic image analysis. References and citations of the retrieved articles were also evaluated to identify pertinent studies. The manuscript was drafted by 2 authors and reviewed in person by members of the American Society for Gastrointestinal Endoscopy Technology Committee and subsequently by the American Society for Gastrointestinal Endoscopy Governing Board. RESULTS Deep learning techniques such as convolutional neural networks have been used in several areas of GI endoscopy, including colorectal polyp detection and classification, analysis of endoscopic images for diagnosis of Helicobacter pylori infection, detection and depth assessment of early gastric cancer, dysplasia in Barrett's esophagus, and detection of various abnormalities in wireless capsule endoscopy images. CONCLUSIONS The implementation of AI technologies across multiple GI endoscopic applications has the potential to transform clinical practice favorably and improve the efficiency and accuracy of current diagnostic methods.
Key Words
- ADR, adenoma detection rate
- AI, artificial intelligence
- AMR, adenoma miss rate
- ANN, artificial neural network
- BE, Barrett’s esophagus
- CAD, computer-aided diagnosis
- CADe, CAD studies for colon polyp detection
- CADx, CAD studies for colon polyp classification
- CI, confidence interval
- CNN, convolutional neural network
- CRC, colorectal cancer
- DL, deep learning
- GI, gastrointestinal
- HD-WLE, high-definition white light endoscopy
- HDWL, high-definition white light
- ML, machine learning
- NBI, narrow-band imaging
- NPV, negative predictive value
- PIVI, Preservation and Incorporation of Valuable Endoscopic Innovations
- SVM, support vector machine
- VLE, volumetric laser endomicroscopy
- WCE, wireless capsule endoscopy
- WL, white light
Affiliation(s)
- Rahul Pannala - Department of Gastroenterology and Hepatology, Mayo Clinic, Scottsdale, Arizona
- Kumar Krishnan - Division of Gastroenterology, Department of Internal Medicine, Harvard Medical School and Massachusetts General Hospital, Boston, Massachusetts
- Joshua Melson - Division of Digestive Diseases, Department of Internal Medicine, Rush University Medical Center, Chicago, Illinois
- Mansour A Parsi - Section for Gastroenterology and Hepatology, Tulane University Health Sciences Center, New Orleans, Louisiana
- Allison R Schulman - Department of Gastroenterology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan
- Shelby Sullivan - Division of Gastroenterology and Hepatology, University of Colorado School of Medicine, Aurora, Colorado
- Guru Trikudanathan - Department of Gastroenterology, Hepatology and Nutrition, University of Minnesota, Minneapolis, Minnesota
- Arvind J Trindade - Department of Gastroenterology, Zucker School of Medicine at Hofstra/Northwell, Long Island Jewish Medical Center, New Hyde Park, New York
- Rabindra R Watson - Department of Gastroenterology, Interventional Endoscopy Services, California Pacific Medical Center, San Francisco, California
- John T Maple - Division of Digestive Diseases and Nutrition, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma
- David R Lichtenstein - Division of Gastroenterology, Boston Medical Center, Boston University School of Medicine, Boston, Massachusetts
23
Ramakrishna RR, Abd Hamid Z, Wan Zaki WMD, Huddin AB, Mathialagan R. Stem cell imaging through convolutional neural networks: current issues and future directions in artificial intelligence technology. PeerJ 2020; 8:e10346. [PMID: 33240655 PMCID: PMC7680049 DOI: 10.7717/peerj.10346] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2020] [Accepted: 10/21/2020] [Indexed: 12/12/2022] Open
Abstract
Stem cells are primitive and precursor cells with the potential to reproduce into diverse mature and functional cell types in the body throughout the developmental stages of life. Their remarkable potential has led to numerous medical discoveries and breakthroughs in science. As a result, stem cell-based therapy has emerged as a new subspecialty in medicine. One promising stem cell being investigated is the induced pluripotent stem cell (iPSC), which is obtained by genetically reprogramming mature cells to convert them into embryonic-like stem cells. These iPSCs are used to study the onset of disease, drug development, and medical therapies. However, functional studies on iPSCs involve the analysis of iPSC-derived colonies through manual identification, which is time-consuming, error-prone, and training-dependent. Thus, an automated instrument for the analysis of iPSC colonies is needed. Recently, artificial intelligence (AI) has emerged as a novel technology to tackle this challenge. In particular, deep learning, a subfield of AI, offers an automated platform for analyzing iPSC colonies and other colony-forming stem cells. Deep learning rectifies data features using a convolutional neural network (CNN), a type of multi-layered neural network that can play an innovative role in image recognition. CNNs are able to distinguish cells with high accuracy based on morphologic and textural changes. Therefore, CNNs have the potential to create a future field of deep learning tasks aimed at solving various challenges in stem cell studies. This review discusses the progress and future of CNNs in stem cell imaging for therapy and research.
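The operation at the core of the CNNs described above is the convolution: a small kernel slides over the image and produces a feature map, and stacking many learned kernels is what lets a CNN pick up morphologic and textural patterns such as colony edges. A minimal pure-Python sketch of that single operation (the kernel here is a fixed illustrative edge detector, not a learned weight, and this is not code from the review):

```python
# Minimal 'valid' 2D cross-correlation, the building block of a CNN layer.

def conv2d_valid(image, kernel):
    """Slide the kernel over the image; output shrinks by kernel size - 1."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A 4x4 "image" with a vertical edge between columns 1 and 2.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]

# Vertical-edge kernel: responds where intensity rises from left to right.
kernel = [[-1, 1],
          [-1, 1]]

print(conv2d_valid(image, kernel))  # peaks in the middle column, at the edge
```

In a real CNN the kernel values are learned from labeled images (e.g. iPSC colony vs non-colony) rather than hand-set, and frameworks apply many kernels per layer with nonlinearities and pooling in between.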
Affiliation(s)
- Ramanaesh Rao Ramakrishna - Biomedical Science Programme and Centre for Diagnostic, Therapeutic and Investigative Science, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia
- Zariyantey Abd Hamid - Biomedical Science Programme and Centre for Diagnostic, Therapeutic and Investigative Science, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia
- Wan Mimi Diyana Wan Zaki - Department of Electrical, Electronic & Systems Engineering, Faculty of Engineering & Built Environment, Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia
- Aqilah Baseri Huddin - Department of Electrical, Electronic & Systems Engineering, Faculty of Engineering & Built Environment, Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia
- Ramya Mathialagan - Biomedical Science Programme and Centre for Diagnostic, Therapeutic and Investigative Science, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia
24
Wang P, Liu P, Glissen Brown JR, Berzin TM, Zhou G, Lei S, Liu X, Li L, Xiao X. Lower Adenoma Miss Rate of Computer-Aided Detection-Assisted Colonoscopy vs Routine White-Light Colonoscopy in a Prospective Tandem Study. Gastroenterology 2020; 159:1252-1261.e5. [PMID: 32562721 DOI: 10.1053/j.gastro.2020.06.023] [Citation(s) in RCA: 143] [Impact Index Per Article: 28.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/24/2020] [Revised: 05/10/2020] [Accepted: 06/10/2020] [Indexed: 12/14/2022]
Abstract
BACKGROUND AND AIMS Up to 30% of adenomas might be missed during screening colonoscopy; these could be polyps that appear on-screen but are not recognized by endoscopists, or polyps in locations that do not appear on the screen at all. Computer-aided detection (CADe) systems, based on deep learning, might reduce rates of missed adenomas by displaying visual alerts that identify precancerous polyps on the endoscopy monitor in real time. We compared adenoma miss rates of CADe colonoscopy vs routine white-light colonoscopy. METHODS We performed a prospective study of patients, 18-75 years old, referred for diagnostic, screening, or surveillance colonoscopies at a single endoscopy center of Sichuan Provincial People's Hospital from June 3, 2019 through September 24, 2019. Same-day tandem colonoscopies were performed for each participant by the same endoscopist. Patients were randomly assigned to groups that received either CADe colonoscopy (n=184) or routine colonoscopy (n=185) first, followed immediately by the other procedure. Endoscopists were blinded to each patient's group assignment until immediately before the start of each colonoscopy. Polyps that were missed by the CADe system but detected by endoscopists were classified as missed polyps. False polyps were those continuously traced by the CADe system but then determined not to be polyps by the endoscopists. The primary endpoint was the adenoma miss rate, defined as the number of adenomas detected in the second-pass colonoscopy divided by the total number of adenomas detected in both passes. RESULTS The adenoma miss rate was significantly lower with CADe colonoscopy (13.89%; 95% CI, 8.24%-19.54%) than with routine colonoscopy (40.00%; 95% CI, 31.23%-48.77%) (P<.0001). The polyp miss rate was significantly lower with CADe colonoscopy (12.98%; 95% CI, 9.08%-16.88%) than with routine colonoscopy (45.90%; 95% CI, 39.65%-52.15%) (P<.0001). Adenoma miss rates in the ascending, transverse, and descending colon were significantly lower with CADe colonoscopy than with routine colonoscopy (ascending colon 6.67% vs 39.13%, P=.0095; transverse colon 16.33% vs 45.16%, P=.0065; descending colon 12.50% vs 40.91%, P=.0364). CONCLUSIONS CADe colonoscopy reduced the overall miss rate of adenomas by endoscopists using white-light endoscopy. Routine use of CADe might reduce the incidence of interval colon cancers. chictr.org.cn study no: ChiCTR1900023086.
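The primary endpoint above is defined arithmetically: in a tandem design, the adenoma miss rate is the number of adenomas found only on the second pass divided by the total found across both passes. A short sketch of that calculation (the counts below are hypothetical, not the study's data):

```python
# Illustrative sketch of the tandem-study adenoma miss rate (AMR).

def adenoma_miss_rate(first_pass, second_pass):
    """AMR = adenomas detected in pass 2 / adenomas detected in passes 1+2."""
    total = first_pass + second_pass
    if total == 0:
        raise ValueError("no adenomas detected in either pass")
    return second_pass / total

# Hypothetical arm totals: 30 adenomas found on pass 1, 10 more on pass 2.
amr = adenoma_miss_rate(first_pass=30, second_pass=10)
print(f"AMR = {amr:.2%}")
```

Note the denominator counts adenomas across both passes, not procedures, which is why the miss rate of the first-pass modality is directly attributable to whatever the second pass recovers.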
Affiliation(s)
- Pu Wang - Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital, Chengdu, China
- Peixi Liu - Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital, Chengdu, China
- Jeremy R Glissen Brown - Center for Advanced Endoscopy, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts
- Tyler M Berzin - Center for Advanced Endoscopy, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts
- Guanyu Zhou - Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital, Chengdu, China
- Shan Lei - Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital, Chengdu, China
- Xiaogang Liu - Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital, Chengdu, China
- Liangping Li - Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital, Chengdu, China
- Xun Xiao - Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital, Chengdu, China