1. Pehrson LM, Petersen J, Panduro NS, Lauridsen CA, Carlsen JF, Darkner S, Nielsen MB, Ingala S. AI-Guided Delineation of Gross Tumor Volume for Body Tumors: A Systematic Review. Diagnostics (Basel) 2025; 15:846. [PMID: 40218196] [PMCID: PMC11988838] [DOI: 10.3390/diagnostics15070846]
Abstract
Background: Approximately 50% of all oncological patients undergo radiation therapy, where personalized treatment planning relies on gross tumor volume (GTV) delineation. Manual delineation of the GTV is time-consuming, operator-dependent, and prone to variability. An increasing number of studies apply artificial intelligence (AI) techniques to automate the delineation process. Methods: We performed a systematic review comparing the performance of AI models for tumor delineation within the body (thoracic cavity, esophagus, abdomen and pelvis, or soft tissue and bone). A retrospective search of five electronic databases covered January 2017 to February 2025. Original research studies developing and/or validating algorithms that delineate the GTV on CT, MRI, and/or PET were included. The Checklist for Artificial Intelligence in Medical Imaging (CLAIM) and the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement and checklist were used to assess risk of bias and reporting adherence. Results: After screening 2430 articles, 48 were included. The pooled performance of the AI algorithms across tumor types and anatomical regions ranged from 0.62 to 0.92 in Dice similarity coefficient (DSC) and from 1.33 to 47.10 mm in Hausdorff distance (HD). The algorithms with the highest DSC used an encoder-decoder architecture. Conclusions: AI algorithms demonstrate a high level of concordance with clinicians in GTV delineation. Translation to clinical settings requires building trust, improving performance and robustness, and testing in prospective studies and randomized controlled trials.
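The two metrics pooled above have precise definitions worth stating. As an illustrative sketch (not code from any of the reviewed studies), a minimal NumPy implementation of the Dice similarity coefficient and a brute-force symmetric Hausdorff distance on binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b, spacing=1.0):
    """Symmetric Hausdorff distance between the foreground voxels of two
    binary masks, computed brute-force; `spacing` converts voxels to mm."""
    pa = np.argwhere(a).astype(float) * spacing
    pb = np.argwhere(b).astype(float) * spacing
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two 4x4 squares offset by one voxel along each axis:
a = np.zeros((10, 10)); a[2:6, 2:6] = 1
b = np.zeros((10, 10)); b[3:7, 3:7] = 1
print(round(dice(a, b), 4))       # 0.5625
print(round(hausdorff(a, b), 3))  # 1.414
```

The brute-force pairwise distance matrix is fine for small masks; production tools typically compute surface distances via distance transforms instead.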
Affiliation(s)
- Lea Marie Pehrson
  - Department of Diagnostic Radiology, Copenhagen University Hospital Rigshospitalet, 2100 Copenhagen, Denmark
  - Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
  - Department of Computer Science, University of Copenhagen, 2100 Copenhagen, Denmark
- Jens Petersen
  - Department of Computer Science, University of Copenhagen, 2100 Copenhagen, Denmark
  - Department of Oncology, Rigshospitalet, 2100 Copenhagen, Denmark
- Nathalie Sarup Panduro
  - Department of Diagnostic Radiology, Copenhagen University Hospital Rigshospitalet, 2100 Copenhagen, Denmark
  - Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
- Carsten Ammitzbøl Lauridsen
  - Department of Diagnostic Radiology, Copenhagen University Hospital Rigshospitalet, 2100 Copenhagen, Denmark
  - Radiography Education, University College Copenhagen, 2200 Copenhagen, Denmark
- Jonathan Frederik Carlsen
  - Department of Diagnostic Radiology, Copenhagen University Hospital Rigshospitalet, 2100 Copenhagen, Denmark
  - Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
- Sune Darkner
  - Department of Computer Science, University of Copenhagen, 2100 Copenhagen, Denmark
- Michael Bachmann Nielsen
  - Department of Diagnostic Radiology, Copenhagen University Hospital Rigshospitalet, 2100 Copenhagen, Denmark
  - Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
- Silvia Ingala
  - Department of Diagnostic Radiology, Copenhagen University Hospital Rigshospitalet, 2100 Copenhagen, Denmark
  - Cerebriu A/S, 1434 Copenhagen, Denmark
  - Department of Diagnostic Radiology, Copenhagen University Hospital Herlev and Gentofte, 2730 Herlev, Denmark
2. Jeong J, Ham S, Seo BK, Lee JT, Wang S, Bae MS, Cho KR, Woo OH, Song SE, Choi H. Superior performance in classification of breast cancer molecular subtype and histological factors by radiomics based on ultrafast MRI over standard MRI: evidence from a prospective study. La Radiologia Medica 2025; 130:368-380. [PMID: 39862364] [PMCID: PMC11903601] [DOI: 10.1007/s11547-025-01956-6]
Abstract
PURPOSE To compare the performance of ultrafast MRI with standard MRI in classifying histological factors and subtypes of invasive breast cancer among radiologists with varying experience. METHODS From October 2021 to November 2022, this prospective study enrolled 225 participants with 233 breast cancers before treatment (NCT06104189 at clinicaltrials.gov). Tumor segmentation on MRI was performed independently by two readers (R1, a dedicated breast radiologist; R2, a radiology resident). We extracted 1618 radiomic features and four kinetic features from the ultrafast and standard images. Logistic regression models were used for prediction, following feature selection by the least absolute shrinkage and selection operator (LASSO). Performance in predicting histological factors and subtypes was evaluated using the area under the receiver-operating characteristic curve (AUC), and performance differences between MRI methods and between radiologists were assessed with the DeLong test. RESULTS Ultrafast MRI outperformed standard MRI in predicting HER2 status (AUCs [95% CI], ultrafast vs standard MRI: 0.87 [0.83-0.91] vs 0.77 [0.64-0.90] for R1 and 0.88 [0.83-0.91] vs 0.77 [0.69-0.84] for R2) (all P < 0.05). Both methods showed comparable performance in predicting hormone receptor status. Ultrafast MRI was superior to standard MRI in classifying subtypes: classification of the luminal subtype for both readers, the HER2-overexpressed subtype for R2, and the triple-negative subtype for R1 was significantly better with ultrafast MRI (P < 0.05). CONCLUSION Compared with standard MRI, ultrafast MRI-based radiomics holds promise as a noninvasive imaging biomarker for classifying hormone receptor status, HER2 status, and molecular subtype, regardless of radiologist experience.
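The evaluation metric used throughout this study, the area under the ROC curve, has a simple rank-based definition via the Mann-Whitney U statistic. A minimal, library-free sketch of that definition (illustrative only; the study's actual pipeline used LASSO feature selection followed by logistic regression):

```python
def auc(labels, scores):
    """Area under the ROC curve as the Mann-Whitney statistic: the
    probability that a randomly chosen positive case outscores a randomly
    chosen negative case, with ties counted as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Three of the four positive/negative pairs are correctly ordered:
print(auc([1, 1, 0, 0], [0.9, 0.4, 0.5, 0.1]))  # 0.75
```

This rank-based form is what `roc_auc_score` in scikit-learn computes, and it makes clear why AUC is invariant to any monotone rescaling of the model's scores.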
Affiliation(s)
- Juhyun Jeong
  - Department of Radiology, Korea University Ansan Hospital, Korea University College of Medicine, 123 Jeokgeum-Ro, Danwon-Gu, Ansan City, 15355, Gyeonggi-Do, Korea
- Sungwon Ham
  - Healthcare Readiness Institute for Unified Korea, Korea University Ansan Hospital, Korea University College of Medicine, Ansan, Republic of Korea
- Bo Kyoung Seo
  - Department of Radiology, Korea University Ansan Hospital, Korea University College of Medicine, 123 Jeokgeum-Ro, Danwon-Gu, Ansan City, 15355, Gyeonggi-Do, Korea
- Jeong Taek Lee
  - Department of Radiology, Korea University Ansan Hospital, Korea University College of Medicine, 123 Jeokgeum-Ro, Danwon-Gu, Ansan City, 15355, Gyeonggi-Do, Korea
- Shuncong Wang
  - Department of Radiology and Nuclear Medicine, Amsterdam University Medical Center, Amsterdam, The Netherlands
- Min Sun Bae
  - Department of Radiology, Korea University Ansan Hospital, Korea University College of Medicine, 123 Jeokgeum-Ro, Danwon-Gu, Ansan City, 15355, Gyeonggi-Do, Korea
- Kyu Ran Cho
  - Department of Radiology, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Republic of Korea
- Ok Hee Woo
  - Department of Radiology, Korea University Guro Hospital, Korea University College of Medicine, Seoul, Republic of Korea
- Sung Eun Song
  - Department of Radiology, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Republic of Korea
- Hangseok Choi
  - Medical Science Research Center, Korea University College of Medicine, Seoul, Republic of Korea
3. Chen D, Yang X, Qin S, Li X, Dai J, Tang Y, Men K. Efficient strategy for magnetic resonance image-guided adaptive radiotherapy of rectal cancer using a library of reference plans. Phys Imaging Radiat Oncol 2025; 33:100747. [PMID: 40123773] [PMCID: PMC11926541] [DOI: 10.1016/j.phro.2025.100747]
Abstract
Background and purpose Adaptive radiotherapy for patients with rectal cancer on a magnetic resonance-guided linear accelerator has limitations in managing bladder shape variations. Conventional couch shifts may miss the target while requiring large margins, whereas a fully adaptive strategy is time-consuming. A more efficient strategy for online adaptive radiotherapy is therefore required. Materials and methods This retrospective study included 50 fractions from 10 patients with rectal cancer undergoing preoperative radiotherapy. The proposed method prepares a library of reference plans (LoRP) based on diverse bladder shapes. For each fraction, a plan was selected from the LoRP according to daily bladder filling and compared with plans generated by the conventional couch-shift and fully adaptive strategies. Clinical acceptability of the plans (per protocol, variation-acceptable, or unacceptable) was assessed. Results Under the per-protocol criterion, 44%, 6%, and 100% of plans were acceptable for the LoRP, couch-shift, and fully adaptive strategies, respectively. Under the variation-acceptable criterion, 92% of LoRP plans and 74% of couch-shift plans were acceptable. Relative to the fully adaptive strategy, LoRP achieved 94% target coverage (at 100% of the prescription dose) compared with 91% for the couch-shift strategy. The fully adaptive strategy best spared the intestine and colon. LoRP shortened the treatment session by more than a third (>20 min) compared with the fully adaptive strategy. Conclusion LoRP achieved adequate target coverage with a short treatment session duration, potentially increasing treatment efficiency and improving patient comfort.
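The online step of a plan-of-the-day approach reduces to a nearest-match lookup against the library. A schematic sketch of that selection step (the plan names and bladder volumes below are invented for illustration; the paper selects on daily bladder filling):

```python
def select_plan(library, daily_volume_cc):
    """Return the id of the reference plan whose bladder volume (cc)
    is closest to the bladder volume measured on the daily scan."""
    return min(library, key=lambda pid: abs(library[pid] - daily_volume_cc))

# Hypothetical three-plan library keyed by bladder-filling state:
library = {"empty": 80.0, "half-full": 200.0, "full": 350.0}
print(select_plan(library, 230.0))  # half-full
```

Because the expensive optimization happens offline when the library is built, the online decision is a constant-time lookup, which is where the reported >20 min time saving over full online replanning comes from.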
Affiliation(s)
- Deqi Chen
  - Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Xiongtao Yang
  - Department of Oncology, Beijing Changping Hospital, Beijing 102202, China
- Shirui Qin
  - Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Xiufen Li
  - Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Jianrong Dai
  - Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Yuan Tang
  - Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Kuo Men
  - Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
4. Zhang Z, Han J, Ji W, Lou H, Li Z, Hu Y, Wang M, Qi B, Liu S. Improved deep learning for automatic localisation and segmentation of rectal cancer on T2-weighted MRI. J Med Radiat Sci 2024; 71:509-518. [PMID: 38654675] [PMCID: PMC11638361] [DOI: 10.1002/jmrs.794]
Abstract
INTRODUCTION Automatic segmentation of rectal cancer from magnetic resonance imaging (MRI) is valuable for relieving physicians of heavy workloads and enhancing working efficiency. This study aimed to compare the segmentation accuracy of a proposed model with that of three other models and with inter-observer consistency. METHODS A total of 65 patients with rectal cancer who underwent MRI examination were enrolled in our cohort and randomly divided into a training cohort (n = 45) and a validation cohort (n = 20). Two experienced radiologists independently segmented the rectal cancer lesions. A novel segmentation model (AttSEResUNet) was trained on T2WI based on ResUNet and attention mechanisms. The segmentation performance of AttSEResUNet, U-Net, ResUNet, and U-Net with Attention Gate (AttUNet) was compared using the Dice similarity coefficient (DSC), Hausdorff distance (HD), mean distance to agreement (MDA), and Jaccard index. The variability of the automatic segmentation models and the inter-observer variability were also evaluated. RESULTS AttSEResUNet with post-processing showed a 100% lesion recognition rate and a 0% false recognition rate, and its evaluation metrics outperformed the other three models for both independent readers (observer 1: DSC = 0.839 ± 0.112, HD = 9.55 ± 6.68, MDA = 0.556 ± 0.722, Jaccard index = 0.736 ± 0.150; observer 2: DSC = 0.856 ± 0.099, HD = 11.0 ± 10.1, MDA = 0.789 ± 1.07, Jaccard index = 0.673 ± 0.130). The performance of AttSEResUNet was comparable to the manual variability (DSC = 0.857 ± 0.115, HD = 10.0 ± 10.0, MDA = 0.704 ± 1.17, Jaccard index = 0.666 ± 0.139). CONCLUSION Compared with the other three models, the proposed AttSEResUNet was more accurate for contouring rectal tumours on axial T2WI images, with variability similar to that between observers.
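The study reports both DSC and Jaccard index; the two overlap metrics are deterministically related, so reporting both adds no independent information. A small illustrative sketch (not the study's code) of the Jaccard index and its relation to Dice:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index |A∩B| / |A∪B| between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

# Dice and Jaccard are monotonically related: D = 2J / (1 + J).
a = np.zeros((8, 8)); a[1:5, 1:5] = 1
b = np.zeros((8, 8)); b[2:6, 2:6] = 1
j = jaccard(a, b)
d = 2 * j / (1 + j)
print(round(j, 4), round(d, 4))
```

Note the relation holds per case, not for cohort means, which is why averaged DSC and Jaccard values in the results table cannot be converted into each other exactly.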
Affiliation(s)
- Zaixian Zhang
  - Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Junqi Han
  - Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Weina Ji
  - Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Henan Lou
  - Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Zhiming Li
  - Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Yabin Hu
  - Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Mingjia Wang
  - College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao, China
- Baozhu Qi
  - College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao, China
- Shunli Liu
  - Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
5. Ma T, Wang J, Ma F, Shi J, Li Z, Cui J, Wu G, Zhao G, An Q. Visualization analysis of research hotspots and trends in MRI-based artificial intelligence in rectal cancer. Heliyon 2024; 10:e38927. [PMID: 39524896] [PMCID: PMC11544045] [DOI: 10.1016/j.heliyon.2024.e38927]
Abstract
Background Rectal cancer (RC) is one of the most common cancers worldwide. With the development of artificial intelligence (AI), its application to magnetic resonance imaging (MRI)-based preoperative evaluation and follow-up treatment of RC has become a research focus. This review was conducted to provide comprehensive insight into the current research progress, hotspots, and future trends of MRI-based AI in RC. Methods Literature related to AI, MRI, and RC published up to November 2023 was retrieved from the Web of Science Core Collection database. Visualization and bibliometric analyses of publication quantity and content were conducted to explore temporal trends, spatial distribution, collaborative networks, influential articles, keyword co-occurrence, and research directions. Results A total of 177 papers (152 original articles and 25 reviews) from 24 countries/regions, 351 institutions, and 81 journals were identified. Since 2019, the number of studies on this topic has increased rapidly. China and the United States have contributed the most publications and institutions and maintain the closest collaborative relationship. The largest number of articles comes from Sun Yat-sen University, and Frontiers in Oncology has published the most articles on the topic. Research on MRI-based AI in this field has mainly focused on preoperative diagnosis and on prediction of treatment efficacy and prognosis. Conclusions This study provides an objective and comprehensive overview of the publications on MRI-based AI in RC and identifies the present research landscape, hotspots, and prospective trends in this field, offering valuable guidance for scholars worldwide.
Affiliation(s)
- Tianming Ma
  - Department of General Surgery, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Jiawen Wang
  - Department of Urology, Shengli Clinical Medical College of Fujian Medical University, Fujian Provincial Hospital, Fuzhou University Affiliated Provincial Hospital, Fuzhou, 350001, China
- Fuhai Ma
  - Department of General Surgery, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Jinxin Shi
  - Department of General Surgery, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Zijian Li
  - Department of General Surgery, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Jian Cui
  - Department of General Surgery, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Guoju Wu
  - Department of General Surgery, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Gang Zhao
  - Department of General Surgery, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Qi An
  - Department of General Surgery, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, 100730, China
6. Kensen CM, Simões R, Betgen A, Wiersema L, Lambregts DM, Peters FP, Marijnen CA, van der Heide UA, Janssen TM. Incorporating patient-specific information for the development of rectal tumor auto-segmentation models for online adaptive magnetic resonance image-guided radiotherapy. Phys Imaging Radiat Oncol 2024; 32:100648. [PMID: 39319094] [PMCID: PMC11421252] [DOI: 10.1016/j.phro.2024.100648]
Abstract
Background and purpose In online adaptive magnetic resonance imaging (MRI)-guided radiotherapy (MRIgRT), manual contouring of rectal tumors on daily images is labor-intensive and time-consuming. Automating this task is complex because tumor shape and location vary substantially between patients. The aim of this work was to investigate different approaches to propagating patient-specific prior information to the online adaptive treatment fractions to improve deep learning-based auto-segmentation of rectal tumors. Materials and methods 243 T2-weighted MRI scans of 49 rectal cancer patients treated on the 1.5 T MR-linear accelerator (MR-Linac) were used to train models to segment rectal tumors. As a benchmark, an MRI_only auto-segmentation model was trained. Three approaches to including a patient-specific prior were studied: (1) including the segmentations of fraction 1 as an extra input channel for auto-segmentation of subsequent fractions (MRI+prior); (2) fine-tuning the MRI_only model to fraction 1 (PSF_1); and (3) fine-tuning the MRI_only model on all earlier fractions (PSF_cumulative). Auto-segmentations were compared with the manual segmentations using geometric similarity metrics, and clinical impact was assessed by evaluating post-treatment target coverage. Results All patient-specific methods outperformed the MRI_only approach. Median 95th-percentile Hausdorff distances (95HD) were 22.0 (range 6.1-76.6) mm for MRI_only segmentation, 9.9 (range 2.5-38.2) mm for MRI+prior, 6.4 (range 2.4-17.8) mm for PSF_1, and 4.8 (range 1.7-26.9) mm for PSF_cumulative. PSF_cumulative was superior to PSF_1 from fraction 4 onward (p = 0.014). Conclusion Patient-specific fine-tuning of rectal tumor auto-segmentation, using images and segmentations from all previous fractions, yields superior quality compared with the other auto-segmentation approaches.
Affiliation(s)
- Chavelli M. Kensen
  - Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Rita Simões
  - Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Anja Betgen
  - Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Lisa Wiersema
  - Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Doenja M.J. Lambregts
  - Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Femke P. Peters
  - Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Corrie A.M. Marijnen
  - Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Uulke A. van der Heide
  - Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Tomas M. Janssen
  - Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
7. Noble DJ, Ramaesh R, Brothwell M, Elumalai T, Barrett T, Stillie A, Paterson C, Ajithkumar T. The Evolving Role of Novel Imaging Techniques for Radiotherapy Planning. Clin Oncol (R Coll Radiol) 2024; 36:514-526. [PMID: 38937188] [DOI: 10.1016/j.clon.2024.05.018]
Abstract
The ability to visualise cancer with imaging has been crucial to the evolution of modern radiotherapy (RT) planning and delivery, and as evolving RT technologies deliver increasingly precise treatment, accurate identification and delineation of disease assumes ever greater significance. However, innovation in imaging technology has matched that seen in RT delivery platforms, and novel imaging techniques are a focus of much research activity. How these imaging modalities may alter and improve the diagnosis and staging of cancer is an important question, but one already well served by the literature. What is less clear is how novel imaging techniques may influence and improve practical and technical aspects of RT planning and delivery. In this review, current gold-standard approaches to the integration of imaging, and potential future applications of bleeding-edge imaging technology in RT planning pathways, are explored.
Affiliation(s)
- D J Noble
  - Department of Clinical Oncology, Edinburgh Cancer Centre, Western General Hospital, Crewe Road South, Edinburgh EH4 2XU, UK
  - Edinburgh Cancer Research Centre, Institute of Genetics and Cancer, The University of Edinburgh, Edinburgh, UK
- R Ramaesh
  - Department of Radiology, Western General Hospital, Edinburgh, UK
- M Brothwell
  - Department of Clinical Oncology, University College London Hospitals, London, UK
- T Elumalai
  - Department of Oncology, Cambridge University Hospitals NHS Foundation Trust, Addenbrooke's Hospital, Cambridge, UK
- T Barrett
  - Department of Radiology, Cambridge University Hospitals NHS Foundation Trust, Addenbrooke's Hospital, Cambridge, UK
- A Stillie
  - Department of Clinical Oncology, Edinburgh Cancer Centre, Western General Hospital, Crewe Road South, Edinburgh EH4 2XU, UK
- C Paterson
  - Beatson West of Scotland Cancer Centre, Great Western Road, Glasgow G12 0YN, UK
- T Ajithkumar
  - Department of Oncology, Cambridge University Hospitals NHS Foundation Trust, Addenbrooke's Hospital, Cambridge, UK
8. Bangolo A, Wadhwani N, Nagesh VK, Dey S, Tran HHV, Aguilar IK, Auda A, Sidiqui A, Menon A, Daoud D, Liu J, Pulipaka SP, George B, Furman F, Khan N, Plumptre A, Sekhon I, Lo A, Weissman S. Impact of artificial intelligence in the management of esophageal, gastric and colorectal malignancies. Artif Intell Gastrointest Endosc 2024; 5:90704. [DOI: 10.37126/aige.v5.i2.90704]
Abstract
The incidence of gastrointestinal malignancies has increased at an alarming rate over the past decade. Colorectal and gastric cancers are the third and fifth most commonly diagnosed cancers worldwide but rank second and third among causes of cancer mortality. Timely diagnosis and early institution of appropriate therapy can optimize patient outcomes. Artificial intelligence (AI)-assisted diagnostic, prognostic, and therapeutic tools can support expeditious diagnosis, treatment planning and response prediction, and post-surgical prognostication. AI can intercept neoplastic lesions in their primordial stages, flag suspicious and/or inconspicuous lesions with greater accuracy on radiologic, histopathological, and/or endoscopic analyses, and reduce over-dependence on clinicians. AI-based models have been shown to perform on par with, and sometimes even outperform, experienced gastroenterologists and radiologists. Convolutional neural networks, the state-of-the-art deep learning models, are powerful computational tools invaluable to the field of precision oncology: they not only reliably classify images but also predict response to chemotherapy, tumor recurrence, metastasis, and post-treatment survival. In this systematic review, we analyze the available evidence on the diagnostic, prognostic, and therapeutic utility of artificial intelligence in gastrointestinal oncology.
Affiliation(s)
- Ayrton Bangolo
  - Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Nikita Wadhwani
  - Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Vignesh K Nagesh
  - Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Shraboni Dey
  - Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Hadrian Hoang-Vu Tran
  - Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Izage Kianifar Aguilar
  - Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Auda Auda
  - Department of Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Aman Sidiqui
  - Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Aiswarya Menon
  - Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Deborah Daoud
  - Department of Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- James Liu
  - Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Sai Priyanka Pulipaka
  - Department of Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Blessy George
  - Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Flor Furman
  - Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Nareeman Khan
  - Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Adewale Plumptre
  - Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Imranjot Sekhon
  - Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Abraham Lo
  - Department of Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Simcha Weissman
  - Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
9. Ferreira Silvério N, van den Wollenberg W, Betgen A, Wiersema L, Marijnen C, Peters F, van der Heide UA, Simões R, Janssen T. Evaluation of Deep Learning Clinical Target Volumes Auto-Contouring for Magnetic Resonance Imaging-Guided Online Adaptive Treatment of Rectal Cancer. Adv Radiat Oncol 2024; 9:101483. [PMID: 38706833] [PMCID: PMC11066509] [DOI: 10.1016/j.adro.2024.101483]
Abstract
Purpose Segmentation of clinical target volumes (CTV) on medical images can be time-consuming and is prone to interobserver variation (IOV). This is a problem for online adaptive radiation therapy, where CTV segmentation must be performed every treatment fraction, leading to longer treatment times and logistic challenges. Deep learning (DL)-based auto-contouring has the potential to speed up CTV contouring, but its current clinical use is limited. One reason is that verifying the accuracy of auto-generated CTV contours can be time-consuming, and there is a risk of introducing bias. To be accepted by clinicians, auto-contouring must be trustworthy. Therefore, a comprehensive commissioning framework is needed when introducing DL-based auto-contouring into clinical practice. We present such a framework and apply it to an in-house developed DL model for auto-contouring of the CTV in rectal cancer patients treated with MRI-guided online adaptive radiation therapy. Methods and Materials The framework for evaluating DL-based auto-contouring consisted of 3 steps: (1) quantitative evaluation of the model's performance and comparison with IOV; (2) expert observations and corrections; and (3) evaluation of the impact on expected volumetric target coverage. These steps were performed on independent data sets. The framework was applied to an in-house trained nnU-Net model, using the data of 44 rectal cancer patients treated at our institution. Results The framework established that the model's performance after expert corrections was comparable to IOV, and although the model introduced a bias, this had no relevant impact on clinical practice. Additionally, we found a substantial time gain without reducing quality as determined by volumetric target coverage. Conclusions Our framework provides a comprehensive evaluation of the performance and clinical usability of target auto-contouring models. Based on the results, we conclude that the model is eligible for clinical use.
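The quantitative step of such a commissioning framework typically compares auto-contours against expert contours with an overlap metric such as the Dice similarity coefficient. A minimal, illustrative sketch (not the authors' code; the toy masks below are invented for demonstration):

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    pred = np.asarray(pred).astype(bool)
    ref = np.asarray(ref).astype(bool)
    total = pred.sum() + ref.sum()
    if total == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / total

# Toy 2D masks standing in for an auto-contour and an expert contour
auto = np.zeros((4, 4), dtype=bool)
auto[1:3, 1:3] = True      # 4 voxels
expert = np.zeros((4, 4), dtype=bool)
expert[1:3, 1:4] = True    # 6 voxels, 4 of them overlapping
# dice_coefficient(auto, expert) → 2*4 / (4+6) = 0.8
```

In a commissioning workflow, the same metric computed between two expert contours gives the interobserver-variation baseline against which the model's score is judged.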
Affiliation(s)
- Anja Betgen
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Lisa Wiersema
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Corrie Marijnen
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Femke Peters
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Uulke A. van der Heide
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Rita Simões
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Tomas Janssen
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
10
Kim M, Park T, Oh BY, Kim MJ, Cho BJ, Son IT. Performance reporting design in artificial intelligence studies using image-based TNM staging and prognostic parameters in rectal cancer: a systematic review. Ann Coloproctol 2024; 40:13-26. [PMID: 38414120 PMCID: PMC10915525 DOI: 10.3393/ac.2023.00892.0127] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/26/2023] [Revised: 01/15/2024] [Accepted: 01/16/2024] [Indexed: 02/29/2024]
Abstract
PURPOSE The integration of artificial intelligence (AI) and magnetic resonance imaging in rectal cancer has the potential to enhance diagnostic accuracy by identifying subtle patterns and aiding tumor delineation and lymph node assessment. According to our systematic review focusing on convolutional neural networks, AI-driven tumor staging and the prediction of treatment response facilitate tailored treatment strategies for patients with rectal cancer. METHODS This paper summarizes the current landscape of AI in the imaging field of rectal cancer, emphasizing the performance reporting design based on the quality of the dataset, model performance, and external validation. RESULTS AI-driven tumor segmentation has demonstrated promising results using various convolutional neural network models. AI-based predictions of staging and treatment response have exhibited potential as auxiliary tools for personalized treatment strategies. Some studies have indicated superior performance compared with conventional models in predicting microsatellite instability and KRAS status, offering noninvasive and cost-effective alternatives for identifying genetic mutations. CONCLUSION Image-based AI studies for rectal cancer have shown acceptable diagnostic performance but face several challenges, including limited sizes of standardized datasets, the need for multicenter studies, and the absence of oncologic relevance and external validation for clinical implementation. Overcoming these pitfalls and hurdles is essential for the feasible integration of AI models in clinical settings for rectal cancer, warranting further research.
Affiliation(s)
- Minsung Kim
- Department of Surgery, Hallym University Sacred Heart Hospital, Hallym University College of Medicine, Anyang, Korea
- Taeyong Park
- Medical Artificial Intelligence Center, Hallym University Medical Center, Anyang, Korea
- Bo Young Oh
- Department of Surgery, Hallym University Sacred Heart Hospital, Hallym University College of Medicine, Anyang, Korea
- Min Jeong Kim
- Department of Radiology, Hallym University Sacred Heart Hospital, Hallym University College of Medicine, Anyang, Korea
- Bum-Joo Cho
- Medical Artificial Intelligence Center, Hallym University Medical Center, Anyang, Korea
- Il Tae Son
- Department of Surgery, Hallym University Sacred Heart Hospital, Hallym University College of Medicine, Anyang, Korea
11
Eidex Z, Ding Y, Wang J, Abouei E, Qiu RLJ, Liu T, Wang T, Yang X. Deep learning in MRI-guided radiation therapy: A systematic review. J Appl Clin Med Phys 2024; 25:e14155. [PMID: 37712893 PMCID: PMC10860468 DOI: 10.1002/acm2.14155] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Received: 03/21/2023] [Revised: 05/10/2023] [Accepted: 08/21/2023] [Indexed: 09/16/2023]
Abstract
Recent advances in MRI-guided radiation therapy (MRgRT) and deep learning techniques encourage fully adaptive radiation therapy (ART), real-time MRI monitoring, and the MRI-only treatment planning workflow. Given the rapid growth and emergence of new state-of-the-art methods in these fields, we systematically review 197 studies published on or before December 31, 2022, and categorize them into the areas of image segmentation, image synthesis, radiomics, and real-time MRI. Building from the underlying deep learning methods, we discuss their clinical importance and current challenges in facilitating small tumor segmentation, accurate X-ray attenuation information from MRI, tumor characterization and prognosis, and tumor motion tracking. In particular, we highlight recent trends in deep learning such as the emergence of multi-modal, vision transformer, and diffusion models.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Richard L. J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
12
Bibault JE, Giraud P. Deep learning for automated segmentation in radiotherapy: a narrative review. Br J Radiol 2024; 97:13-20. [PMID: 38263838 PMCID: PMC11027240 DOI: 10.1093/bjr/tqad018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/18/2023] [Revised: 08/10/2023] [Accepted: 10/27/2023] [Indexed: 01/25/2024]
Abstract
The segmentation of organs and structures is a critical component of radiation therapy planning, with manual segmentation being a laborious and time-consuming task. Interobserver variability can also impact the outcomes of radiation therapy. Deep neural networks have recently gained attention for their ability to automate segmentation tasks, with convolutional neural networks (CNNs) being a popular approach. This article provides a descriptive review of the literature on deep learning (DL) techniques for segmentation in radiation therapy planning. The review focuses on five clinical sub-sites, covering brain, head and neck, lung, abdominal, and pelvic cancers, and finds that U-Net is the most commonly used CNN architecture. The majority of DL segmentation articles in radiation therapy planning have concentrated on normal tissue structures. N-fold cross-validation was commonly employed, without external validation. This research area is expanding quickly, and standardization of metrics and independent validation are critical to benchmarking and comparing proposed methods.
Affiliation(s)
- Jean-Emmanuel Bibault
- Radiation Oncology Department, Georges Pompidou European Hospital, Assistance Publique—Hôpitaux de Paris, Université de Paris Cité, Paris, 75015, France
- INSERM UMR 1138, Centre de Recherche des Cordeliers, Paris, 75006, France
- Paul Giraud
- INSERM UMR 1138, Centre de Recherche des Cordeliers, Paris, 75006, France
- Radiation Oncology Department, Pitié Salpêtrière Hospital, Assistance Publique—Hôpitaux de Paris, Paris Sorbonne Universités, Paris, 75013, France
13
Li D, Wang J, Yang J, Zhao J, Yang X, Cui Y, Zhang K. RTAU-Net: A novel 3D rectal tumor segmentation model based on dual path fusion and attentional guidance. Comput Methods Programs Biomed 2023; 242:107842. [PMID: 37832426 DOI: 10.1016/j.cmpb.2023.107842] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/12/2023] [Revised: 09/18/2023] [Accepted: 10/01/2023] [Indexed: 10/15/2023]
Abstract
BACKGROUND AND OBJECTIVE According to the Global Cancer Statistics 2020, colorectal cancer has the third-highest diagnosis rate (10.0%) and the second-highest mortality rate (9.4%) among the 36 cancer types. Rectal cancer accounts for a large proportion of colorectal cancer. The size and shape of a rectal tumor can directly affect diagnosis and treatment. Existing rectal tumor segmentation methods operate on two-dimensional slices, which cannot analyze a patient's tumor as a whole and lose the correlation between MRI slices, limiting their practical value. METHODS In this paper, a three-dimensional rectal tumor segmentation model is proposed. Firstly, image preprocessing is performed to reduce the effect of the unbalanced proportion of background and target regions and to improve image quality. Secondly, a dual-path fusion network is designed to extract both global features and local detail features of rectal tumors. The network includes two encoders: a residual encoder for enhancing the spatial detail information and feature representation of the tumor, and a transformer encoder for extracting the global contour information of the tumor. In the decoding stage, the information extracted from the dual paths is merged and decoded. In addition, to address the complex morphology and varying sizes of rectal tumors, a multi-scale fusion channel attention mechanism is designed, which can capture important contextual information at different scales. Finally, the 3D rectal tumor segmentation results are visualized. RESULTS RTAU-Net was evaluated on data sets provided by Shanxi Provincial Cancer Hospital and Xinhua Hospital. The Dice scores for tumor segmentation reached 0.7978 and 0.6792, respectively, improvements of 2.78% and 7.02% over the next-best model. CONCLUSIONS Although the morphology of rectal tumors varies, RTAU-Net can precisely localize rectal tumors and learn their contours and details, which can relieve physicians' workload and improve diagnostic accuracy.
Affiliation(s)
- Dengao Li
- College of Data Science, Taiyuan University of Technology, Taiyuan 030024, China; Key Laboratory of Big Data Fusion Analysis and Application of Shanxi Province, Taiyuan University of Technology, Taiyuan, Shanxi, China; Intelligent Perception Engineering Technology Center of Shanxi, Taiyuan University of Technology, Taiyuan, Shanxi, China.
- Juan Wang
- College of Data Science, Taiyuan University of Technology, Taiyuan 030024, China; Key Laboratory of Big Data Fusion Analysis and Application of Shanxi Province, Taiyuan University of Technology, Taiyuan, Shanxi, China; Intelligent Perception Engineering Technology Center of Shanxi, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Jicheng Yang
- Computer technology, Ocean University of China, Qingdao 266100, China
- Jumin Zhao
- College of Information and Computer, Taiyuan University of Technology, Taiyuan 030024, China; Key Laboratory of Big Data Fusion Analysis and Application of Shanxi Province, Taiyuan University of Technology, Taiyuan, Shanxi, China; Intelligent Perception Engineering Technology Center of Shanxi, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Xiaotang Yang
- Department of Radiology, Shanxi Province Cancer Hospital, Shanxi Medical University, Taiyuan 030013, China
- Yanfen Cui
- Department of Radiology, Shanxi Province Cancer Hospital, Shanxi Medical University, Taiyuan 030013, China
- Kenan Zhang
- College of Data Science, Taiyuan University of Technology, Taiyuan 030024, China; Key Laboratory of Big Data Fusion Analysis and Application of Shanxi Province, Taiyuan University of Technology, Taiyuan, Shanxi, China; Intelligent Perception Engineering Technology Center of Shanxi, Taiyuan University of Technology, Taiyuan, Shanxi, China
14
Luan S, Wei C, Ding Y, Xue X, Wei W, Yu X, Wang X, Ma C, Zhu B. PCG-net: feature adaptive deep learning for automated head and neck organs-at-risk segmentation. Front Oncol 2023; 13:1177788. [PMID: 37927463 PMCID: PMC10623055 DOI: 10.3389/fonc.2023.1177788] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Received: 03/08/2023] [Accepted: 10/03/2023] [Indexed: 11/07/2023]
Abstract
Introduction Radiation therapy is a common treatment option for Head and Neck Cancer (HNC), where accurate segmentation of Head and Neck (HN) Organs-At-Risk (OARs) is critical for effective treatment planning. Manual labeling of HN OARs is time-consuming and subjective, so deep learning segmentation methods have been widely used. However, HN OAR segmentation remains challenging because of small-sized OARs such as the optic chiasm and optic nerve. Methods To address this challenge, we propose a parallel network architecture called PCG-Net, which incorporates both convolutional neural networks (CNN) and a Gate-Axial-Transformer (GAT) to effectively capture local information and global context. Additionally, we employ a cascade graph module (CGM) to enhance feature fusion through message-passing functions and information aggregation strategies. We conducted extensive experiments to evaluate the effectiveness of PCG-Net and its robustness in three different downstream tasks. Results The results show that PCG-Net outperforms other methods and improves the accuracy of HN OAR segmentation, which can potentially improve treatment planning for HNC patients. Discussion In summary, the PCG-Net model effectively establishes the dependency between local information and global context and employs CGM to enhance feature fusion to accurately segment HN OARs. The results demonstrate the superiority of PCG-Net over other methods, making it a promising approach for HNC treatment planning.
Affiliation(s)
- Shunyao Luan
- School of Integrated Circuit, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Changchao Wei
- Key Laboratory of Artificial Micro and Nano-structures of Ministry of Education, Center for Theoretical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
- Yi Ding
- Department of Radiation Oncology, Hubei Cancer Hospital, TongJi Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Xudong Xue
- Department of Radiation Oncology, Hubei Cancer Hospital, TongJi Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Wei Wei
- Department of Radiation Oncology, Hubei Cancer Hospital, TongJi Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Xiao Yu
- Department of Radiation Oncology, The First Affiliated Hospital of University of Science and Technology of China, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Xiao Wang
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Rutgers-Robert Wood Johnson Medical School, New Brunswick, NJ, United States
- Chi Ma
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Rutgers-Robert Wood Johnson Medical School, New Brunswick, NJ, United States
- Benpeng Zhu
- School of Integrated Circuit, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
15
Geng J, Zhu X, Liu Z, Chen Q, Bai L, Wang S, Li Y, Wu H, Yue H, Du Y. Towards deep-learning (DL) based fully automated target delineation for rectal cancer neoadjuvant radiotherapy using a divide-and-conquer strategy: a study with multicenter blind and randomized validation. Radiat Oncol 2023; 18:164. [PMID: 37803462 PMCID: PMC10557242 DOI: 10.1186/s13014-023-02350-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/30/2023] [Accepted: 09/13/2023] [Indexed: 10/08/2023]
Abstract
PURPOSE Manual clinical target volume (CTV) and gross tumor volume (GTV) delineation for rectal cancer neoadjuvant radiotherapy is pivotal but labor-intensive. This study proposes a deep learning (DL)-based workflow towards fully automated CTV and GTV delineation for rectal cancer neoadjuvant radiotherapy. MATERIALS & METHODS We retrospectively included 141 patients with Stage II-III mid-low rectal cancer and randomly grouped them into training (n = 121) and testing (n = 20) cohorts. We adopted a divide-and-conquer strategy to address CTV and GTV segmentation using two separate DL models with DpuUnet as the backend: one model for CTV segmentation in the CT domain, and the other for GTV in the MRI domain. The workflow was validated using a three-level, multicenter-involved, blind and randomized evaluation scheme. Dice similarity coefficient (DSC) and 95th percentile Hausdorff distance (95HD) metrics were calculated in Level 1, four-grade expert scoring was performed in Level 2, and a head-to-head Turing test was conducted in Level 3. RESULTS For the DL-based CTV contours over the testing cohort, the DSC and 95HD (mean ± SD) were 0.85 ± 0.06 and 7.75 ± 6.42 mm, respectively, and 96.4% of cases achieved clinically viable scores (≥ 2). The positive rate in the Turing test was 52.3%. For GTV, the DSC and 95HD were 0.87 ± 0.07 and 4.07 ± 1.67 mm, respectively, and 100% of the DL-based contours achieved clinically viable scores (≥ 2). The positive rate in the Turing test was 52.0%. CONCLUSION The proposed DL-based workflow exhibited promising accuracy and excellent clinical viability towards automated CTV and GTV delineation for rectal cancer neoadjuvant radiotherapy.
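The Level-1 metrics reported here (DSC and 95HD) are standard contour-comparison measures. The 95th-percentile Hausdorff distance can be sketched for small sets of surface points by taking the 95th percentile of symmetric nearest-neighbor distances; a simplified, illustrative sketch (brute-force pairwise distances on toy points, not the study's code; real pipelines use KD-trees and account for voxel spacing):

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile Hausdorff distance between two point sets (e.g., contour surface voxels)."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    # All pairwise Euclidean distances (fine for small sets; use a KD-tree at scale)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # each point of A to its nearest point of B
    d_ba = d.min(axis=0)  # each point of B to its nearest point of A
    # Symmetric 95th percentile: robust to outlier spikes, unlike the max (classic HD)
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

# Toy 2D point sets standing in for two contour surfaces
a = [[0.0, 0.0], [1.0, 0.0]]
b = [[0.0, 0.0], [1.0, 1.0]]
# hd95(a, b) → 1.0 for this toy pair
```

Using the 95th percentile rather than the maximum is what makes 95HD less sensitive to a single stray voxel on either contour.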
Affiliation(s)
- Jianhao Geng
- Key laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Xianggao Zhu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Zhiyan Liu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Qi Chen
- Research and Development Department, MedMind Technology Co., Ltd, Beijing, 100083, China
- Lu Bai
- Research and Development Department, MedMind Technology Co., Ltd, Beijing, 100083, China
- Shaobin Wang
- Research and Development Department, MedMind Technology Co., Ltd, Beijing, 100083, China
- Yongheng Li
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Hao Wu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Institute of Medical Technology, Peking University Health Science Center, Beijing, 100191, China
- Haizhen Yue
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Yi Du
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Institute of Medical Technology, Peking University Health Science Center, Beijing, 100191, China
16
Mori S, Hirai R, Sakata Y, Tachibana Y, Koto M, Ishikawa H. Deep neural network-based synthetic image digital fluoroscopy using digitally reconstructed tomography. Phys Eng Sci Med 2023; 46:1227-1237. [PMID: 37349631 DOI: 10.1007/s13246-023-01290-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/06/2023] [Accepted: 06/16/2023] [Indexed: 06/24/2023]
Abstract
We developed a deep neural network (DNN) to generate X-ray flat panel detector (FPD) images from digitally reconstructed radiographic (DRR) images. FPD and treatment planning CT images were acquired from patients with prostate and head and neck (H&N) malignancies. The DNN parameters were optimized for FPD image synthesis. The synthetic FPD images were compared with the corresponding ground-truth FPD images using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM). The image quality of the synthetic FPD images was also compared with that of the DRR images to assess the performance of our DNN. For the prostate cases, the MAE of the synthetic FPD images (0.12 ± 0.02) improved on that of the input DRR images (0.35 ± 0.08). The synthetic FPD images showed higher PSNRs (16.81 ± 1.54 dB) than the DRR images (8.74 ± 1.56 dB), while the SSIMs of both (0.69) were almost the same. All metrics for the synthetic FPD images of the H&N cases improved (MAE 0.08 ± 0.03, PSNR 19.40 ± 2.83 dB, SSIM 0.80 ± 0.04) compared with those of the DRR images (MAE 0.48 ± 0.11, PSNR 5.74 ± 1.63 dB, SSIM 0.52 ± 0.09). Our DNN successfully generated FPD images from DRR images. This technique would be useful for increasing throughput when images from two different modalities are compared by visual inspection.
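Two of the three image-quality metrics above (MAE and PSNR) reduce to short formulas; SSIM is omitted here because it requires windowed local statistics. A minimal sketch with invented toy intensities, assuming images normalized to a data range of 1.0 (illustrative only, not the study's evaluation code):

```python
import numpy as np

def mae(img, ref):
    """Mean absolute error between two images."""
    return float(np.abs(np.asarray(img, dtype=float) - np.asarray(ref, dtype=float)).mean())

def psnr(img, ref, data_range=1.0):
    """Peak signal-to-noise ratio in dB for intensities spanning `data_range`."""
    mse = float(((np.asarray(img, dtype=float) - np.asarray(ref, dtype=float)) ** 2).mean())
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

# Toy example: a synthetic image off by a constant 0.1 from its ground truth
ref = np.array([0.0, 0.25, 0.5, 0.75])
img = ref + 0.1
# mae(img, ref) ≈ 0.1; psnr(img, ref) ≈ 20 dB (MSE = 0.01)
```

Higher PSNR and lower MAE both indicate the synthetic image is closer to the ground truth, which is the direction of improvement reported above.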
Affiliation(s)
- Shinichiro Mori
- National Institutes for Quantum Science and Technology, Quantum Life and Medical Science Directorate, Institute for Quantum Medical Science, Inage-ku, Chiba, 263-8555, Japan.
- Ryusuke Hirai
- Corporate Research and Development Center, Toshiba Corporation, Kanagawa, 212-8582, Japan
- Yukinobu Sakata
- Corporate Research and Development Center, Toshiba Corporation, Kanagawa, 212-8582, Japan
- Yasuhiko Tachibana
- National Institutes for Quantum Science and Technology, Quantum Life and Medical Science Directorate, Institute for Quantum Medical Science, Inage-ku, Chiba, 263-8555, Japan
- Masashi Koto
- QST Hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba, 263-8555, Japan
- Hitoshi Ishikawa
- QST Hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba, 263-8555, Japan
17
Poel R, Kamath AJ, Willmann J, Andratschke N, Ermiş E, Aebersold DM, Manser P, Reyes M. Deep-Learning-Based Dose Predictor for Glioblastoma-Assessing the Sensitivity and Robustness for Dose Awareness in Contouring. Cancers (Basel) 2023; 15:4226. [PMID: 37686501 PMCID: PMC10486555 DOI: 10.3390/cancers15174226] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/03/2023] [Revised: 08/16/2023] [Accepted: 08/21/2023] [Indexed: 09/10/2023]
Abstract
External beam radiation therapy requires a sophisticated and laborious planning procedure. To improve the efficiency and quality of this procedure, machine-learning models that predict dose distributions have been introduced. The most recent dose prediction models are based on deep-learning architectures called 3D U-Nets that give good approximations of the dose in 3D almost instantly. Our purpose was to train such a 3D dose prediction model for glioblastoma VMAT treatment and test its robustness and sensitivity for the purpose of quality assurance of automatic contouring. From a cohort of 125 glioblastoma (GBM) patients, VMAT plans were created according to a clinical protocol. The initial model was trained on a cascaded 3D U-Net. A total of 60 cases were used for training, 15 for validation, and 20 for testing. The prediction model was tested for sensitivity to dose changes when subject to realistic contour variations. Additionally, the model was tested for robustness by exposing it to a worst-case test set containing out-of-distribution cases. The initially trained prediction model had a dose score of 0.94 Gy and a mean DVH (dose-volume histogram) score over all structures of 1.95 Gy. In terms of sensitivity, the model was able to predict the dose changes that occurred due to the contour variations with a mean error of 1.38 Gy. We obtained a 3D VMAT dose prediction model for GBM with limited data, providing good sensitivity to realistic contour variations. We tested and improved the model's robustness by targeted updates to the training set, making it a useful technique for introducing dose awareness in the contouring evaluation and quality assurance process.
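The DVH scores used to evaluate such dose predictors summarize dose-volume histograms. A minimal sketch of computing a cumulative DVH and a V_D-style coverage metric from a structure's voxel doses (the toy dose values are invented for illustration; this is not the study's pipeline):

```python
import numpy as np

def cumulative_dvh(dose, bin_width=0.5):
    """Cumulative DVH: fraction of structure volume receiving at least each dose level (Gy)."""
    dose = np.asarray(dose, dtype=float).ravel()
    levels = np.arange(0.0, dose.max() + bin_width, bin_width)
    volume_fraction = np.array([(dose >= lv).mean() for lv in levels])
    return levels, volume_fraction

def v_d(dose, level):
    """V_D metric: fraction of structure volume receiving at least `level` Gy."""
    return float((np.asarray(dose, dtype=float).ravel() >= level).mean())

# Toy voxel doses (Gy) sampled from a target structure
dose = np.array([58.0, 59.5, 60.2, 61.0, 62.3, 60.8])
# v_d(dose, 60.0) → fraction of the structure covered by at least 60 Gy (4 of 6 voxels here)
```

A DVH-based score then compares such curves between predicted and planned dose, so contour errors that shift coverage show up directly in the metric.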
Affiliation(s)
- Robert Poel
- Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, CH-3010 Bern, Switzerland
- ARTORG Center for Biomedical Research, University of Bern, CH-3010 Bern, Switzerland
- Amith J. Kamath
- ARTORG Center for Biomedical Research, University of Bern, CH-3010 Bern, Switzerland
- Jonas Willmann
- Department of Radiation Oncology, University Hospital Zurich, University of Zurich, CH-8091 Zurich, Switzerland
- Nicolaus Andratschke
- Department of Radiation Oncology, University Hospital Zurich, University of Zurich, CH-8091 Zurich, Switzerland
- Ekin Ermiş
- Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, CH-3010 Bern, Switzerland
- Daniel M. Aebersold
- Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, CH-3010 Bern, Switzerland
- Peter Manser
- Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, CH-3010 Bern, Switzerland
- Division of Medical Radiation Physics, Inselspital, Bern University Hospital, University of Bern, CH-3010 Bern, Switzerland
- Mauricio Reyes
- Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, CH-3010 Bern, Switzerland
- ARTORG Center for Biomedical Research, University of Bern, CH-3010 Bern, Switzerland
18
DeSilvio T, Antunes JT, Bera K, Chirra P, Le H, Liska D, Stein SL, Marderstein E, Hall W, Paspulati R, Gollamudi J, Purysko AS, Viswanath SE. Region-specific deep learning models for accurate segmentation of rectal structures on post-chemoradiation T2w MRI: a multi-institutional, multi-reader study. Front Med (Lausanne) 2023; 10:1149056. [PMID: 37250635 PMCID: PMC10213753 DOI: 10.3389/fmed.2023.1149056] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Received: 01/20/2023] [Accepted: 03/27/2023] [Indexed: 05/31/2023]
Abstract
Introduction For locally advanced rectal cancers, in vivo radiological evaluation of tumor extent and regression after neoadjuvant therapy involves implicit visual identification of rectal structures on magnetic resonance imaging (MRI). Additionally, newer image-based, computational approaches (e.g., radiomics) require more detailed and precise annotations of regions such as the outer rectal wall, lumen, and perirectal fat. Manual annotations of these regions, however, are highly laborious and time-consuming as well as subject to inter-reader variability due to tissue boundaries being obscured by treatment-related changes (e.g., fibrosis, edema). Methods This study presents the application of U-Net deep learning models that have been uniquely developed with region-specific context to automatically segment each of the outer rectal wall, lumen, and perirectal fat regions on post-treatment, T2-weighted MRI scans. Results In multi-institutional evaluation, region-specific U-Nets (wall Dice = 0.920, lumen Dice = 0.895) were found to perform comparably to multiple readers (wall inter-reader Dice = 0.946, lumen inter-reader Dice = 0.873). Additionally, when compared to a multi-class U-Net, region-specific U-Nets yielded an average 20% improvement in Dice scores for segmenting each of the wall, lumen, and fat; even when tested on T2-weighted MRI scans that exhibited poorer image quality, or from a different plane, or were accrued from an external institution. Discussion Developing deep learning segmentation models with region-specific context may thus enable highly accurate, detailed annotations for multiple rectal structures on post-chemoradiation T2-weighted MRI scans, which is critical for improving evaluation of tumor extent in vivo and building accurate image-based analytic tools for rectal cancers.
Affiliation(s)
- Thomas DeSilvio
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States
- Jacob T. Antunes
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States
- Kaustav Bera
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States
- Prathyush Chirra
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States
- Hoa Le
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States
- David Liska
- Department of Colorectal Surgery, Cleveland Clinic, Cleveland, OH, United States
- Sharon L. Stein
- Department of Surgery, University Hospitals Cleveland Medical Center, Cleveland, OH, United States
- Eric Marderstein
- Northeast Ohio Veterans Affairs Medical Center, Cleveland, OH, United States
- William Hall
- Department of Radiation Oncology and Surgery, Medical College of Wisconsin, Milwaukee, WI, United States
- Rajmohan Paspulati
- Department of Diagnostic Imaging and Interventional Radiology, Moffitt Cancer Center, Tampa, FL, United States
- Andrei S. Purysko
- Section of Abdominal Imaging and Nuclear Radiology Department, Cleveland Clinic, Cleveland, OH, United States
- Satish E. Viswanath
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States
19
Wang J, Qu A, Wang Q, Zhao Q, Liu J, Wu Q. TT-Net: Tensorized Transformer Network for 3D medical image segmentation. Comput Med Imaging Graph 2023; 107:102234. [PMID: 37075619 DOI: 10.1016/j.compmedimag.2023.102234] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Received: 11/24/2022] [Revised: 02/09/2023] [Accepted: 03/24/2023] [Indexed: 04/21/2023]
Abstract
Accurate segmentation of organs, tissues, and lesions is essential for computer-assisted diagnosis. Previous works have achieved success in automatic segmentation, but two limitations remain. (1) Existing methods are challenged by complex conditions, such as segmentation targets that vary in location, size, and shape, especially across imaging modalities. (2) Existing transformer-based networks suffer from high parametric complexity. To address these limitations, we propose the Tensorized Transformer Network (TT-Net). In this paper, (1) a multi-scale transformer with layer fusion is proposed to faithfully capture context-interaction information; (2) a Cross Shared Attention (CSA) module based on pHash similarity fusion (pSF) is designed to extract global multi-variate dependency features; and (3) a Tensorized Self-Attention (TSA) module is proposed to reduce the large number of parameters, and it can also be easily embedded into other models. In addition, TT-Net gains good explainability through visualization of its transformer layers. The proposed method is evaluated on three widely used public datasets and one clinical dataset, spanning different imaging modalities. Comprehensive results show that TT-Net outperforms other state-of-the-art methods on the four segmentation tasks. Moreover, the compression module, which can be easily embedded into other transformer-based methods, achieves lower computation with comparable segmentation performance.
Affiliation(s)
- Jing Wang
- Shandong University, School of Information Science and Engineering, Qingdao 266237, China
- Aixi Qu
- Shandong University, School of Information Science and Engineering, Qingdao 266237, China
- Qing Wang
- QiLu Hospital of Shandong University, Radiology Department, Jinan 250012, China
- Qibin Zhao
- RIKEN Center for Advanced Intelligence Project, Japan
- Ju Liu
- Shandong University, School of Information Science and Engineering, Qingdao 266237, China; Shandong University, Institute of Brain and Brain-Inspired Science, Jinan 250012, China
- Qiang Wu
- Shandong University, School of Information Science and Engineering, Qingdao 266237, China; Shandong University, Institute of Brain and Brain-Inspired Science, Jinan 250012, China
20
Liu X, Li Z, Yin Y. Clinical application of MR-Linac in tumor radiotherapy: a systematic review. Radiat Oncol 2023; 18:52. [PMID: 36918884 PMCID: PMC10015924 DOI: 10.1186/s13014-023-02221-8] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Accepted: 02/01/2023] [Indexed: 03/15/2023] Open
Abstract
Recent years have brought both fresh understanding of cancer and impressive advances in its treatment. However, the clinical treatment of cancer remains difficult in the twenty-first century due to its rising prevalence. Radiotherapy (RT) is a crucial component of cancer treatment that is helpful for almost all cancer types. The accuracy of RT dose delivery is increasing as a result of the rapid development of computing and imaging technology. The use of image-guided radiotherapy (IGRT) has improved cancer outcomes and decreased toxicity. Magnetic resonance imaging-guided radiotherapy (MRgRT) using a magnetic resonance linear accelerator (MR-Linac) will make online adaptive radiotherapy possible and enhance the visibility of malignancies. The objectives of this review are to examine the benefits of MR-Linac as a treatment approach from the perspective of the prognoses of various cancer patients and to suggest prospective areas for additional study.
Affiliation(s)
- Xin Liu
- Department of Oncology, Affiliated Hospital of Southwest Medical University, Luzhou, 646000, China; Department of Radiation Physics, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, 250117, China
- Zhenjiang Li
- Department of Radiation Physics, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, 250117, China
- Yong Yin
- Department of Oncology, Affiliated Hospital of Southwest Medical University, Luzhou, 646000, China; Department of Radiation Physics, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, 250117, China
21
Ghezzo S, Mongardi S, Bezzi C, Samanes Gajate AM, Preza E, Gotuzzo I, Baldassi F, Jonghi-Lavarini L, Neri I, Russo T, Brembilla G, De Cobelli F, Scifo P, Mapelli P, Picchio M. External validation of a convolutional neural network for the automatic segmentation of intraprostatic tumor lesions on 68Ga-PSMA PET images. Front Med (Lausanne) 2023; 10:1133269. [PMID: 36910493 PMCID: PMC9995820 DOI: 10.3389/fmed.2023.1133269] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2022] [Accepted: 02/07/2023] [Indexed: 02/25/2023] Open
Abstract
Introduction State-of-the-art artificial intelligence (AI) models have the potential to become a "one-stop shop" to improve diagnosis and prognosis in several oncological settings. The external validation of AI models on independent cohorts is essential to evaluate their generalization ability, and hence their potential utility in clinical practice. In this study, we tested a recently proposed state-of-the-art convolutional neural network for the automatic segmentation of intraprostatic cancer lesions on PSMA PET images on a large, separate cohort. Methods Eighty-five biopsy-proven prostate cancer patients who underwent 68Ga-PSMA PET for staging purposes were enrolled in this study. Images were acquired with either fully hybrid PET/MRI (N = 46) or PET/CT (N = 39); all participants showed at least one intraprostatic pathological finding on PET images, which was independently segmented by two Nuclear Medicine physicians. The trained model was available at https://gitlab.com/dejankostyszyn/prostate-gtv-segmentation and data processing was done in agreement with the reference work. Results Compared to manual contouring, the AI model yielded a median Dice score of 0.74, showing moderately good performance. Results were robust to the modality used to acquire images (PET/CT or PET/MRI) and to the ground-truth labels (no significant difference in the model's performance when compared to reader 1 or reader 2 manual contouring). Discussion In conclusion, this AI model could be used to automatically segment intraprostatic cancer lesions for research purposes, for instance to define the volume of interest for radiomics or deep learning analysis. However, more robust performance is needed before AI-based decision support technologies can be proposed for clinical practice.
Affiliation(s)
- Samuele Ghezzo
- Department of Medicine and Surgery, Vita-Salute San Raffaele University, Milan, Italy; Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Sofia Mongardi
- Department of Medicine and Surgery, Vita-Salute San Raffaele University, Milan, Italy
- Carolina Bezzi
- Department of Medicine and Surgery, Vita-Salute San Raffaele University, Milan, Italy; Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Erik Preza
- Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Irene Gotuzzo
- School of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- Francesco Baldassi
- School of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- Ilaria Neri
- Department of Medicine and Surgery, Vita-Salute San Raffaele University, Milan, Italy; Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Tommaso Russo
- Department of Medicine and Surgery, Vita-Salute San Raffaele University, Milan, Italy; Department of Radiology, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Giorgio Brembilla
- Department of Medicine and Surgery, Vita-Salute San Raffaele University, Milan, Italy; Department of Radiology, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Francesco De Cobelli
- Department of Medicine and Surgery, Vita-Salute San Raffaele University, Milan, Italy; Department of Radiology, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Paola Scifo
- Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Paola Mapelli
- Department of Medicine and Surgery, Vita-Salute San Raffaele University, Milan, Italy; Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Maria Picchio
- Department of Medicine and Surgery, Vita-Salute San Raffaele University, Milan, Italy; Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Milan, Italy
22
Wong C, Fu Y, Li M, Mu S, Chu X, Fu J, Lin C, Zhang H. MRI-Based Artificial Intelligence in Rectal Cancer. J Magn Reson Imaging 2023; 57:45-56. [PMID: 35993550 DOI: 10.1002/jmri.28381] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2022] [Revised: 07/19/2022] [Accepted: 07/20/2022] [Indexed: 02/03/2023] Open
Abstract
Rectal cancer (RC) accounts for approximately one-third of colorectal cancer (CRC), with death rates increasing in patients younger than 50 years old. Magnetic resonance imaging (MRI) is routinely performed for tumor evaluation. However, the semantic features from images alone remain insufficient to guide treatment decisions. Functional MRI is useful for revealing microstructural and functional abnormalities but nevertheless has low to modest repeatability and reproducibility. Therefore, during the preoperative evaluation and follow-up treatment of patients with RC, novel noninvasive imaging markers are needed to describe tumor characteristics, guide treatment strategies, and achieve individualized diagnosis and treatment. In recent years, the development of artificial intelligence (AI) has created new tools for RC evaluation based on MRI. In this review, we summarize the research progress of AI in the evaluation of staging, prediction of high-risk factors, genotyping, response to therapy, recurrence, metastasis, prognosis, and segmentation of RC. We further discuss the challenges of clinical application, including improvement in imaging, model performance, and the biological meaning of features, which may also be major development directions in the future. EVIDENCE LEVEL: 5. TECHNICAL EFFICACY: Stage 2.
Affiliation(s)
- Chinting Wong
- Department of Nuclear Medicine, The First Hospital of Jilin University, Changchun, China
- Yu Fu
- Department of Radiology, The First Hospital of Jilin University, Jilin Provincial Key Laboratory of Medical Imaging and Big Data, Changchun, China
- Mingyang Li
- Department of Radiology, The First Hospital of Jilin University, Jilin Provincial Key Laboratory of Medical Imaging and Big Data, Changchun, China
- Shengnan Mu
- Department of Radiology, The First Hospital of Jilin University, Jilin Provincial Key Laboratory of Medical Imaging and Big Data, Changchun, China
- Xiaotong Chu
- Department of Radiology, The First Hospital of Jilin University, Jilin Provincial Key Laboratory of Medical Imaging and Big Data, Changchun, China
- Jiahui Fu
- Department of Radiology, The First Hospital of Jilin University, Jilin Provincial Key Laboratory of Medical Imaging and Big Data, Changchun, China
- Chenghe Lin
- Department of Nuclear Medicine, The First Hospital of Jilin University, Changchun, China
- Huimao Zhang
- Department of Radiology, The First Hospital of Jilin University, Jilin Provincial Key Laboratory of Medical Imaging and Big Data, Changchun, China
23
Cheon W, Jeong S, Jeong JH, Lim YK, Shin D, Lee SB, Lee DY, Lee SU, Suh YG, Moon SH, Kim TH, Kim H. Interobserver Variability Prediction of Primary Gross Tumor in a Patient with Non-Small Cell Lung Cancer. Cancers (Basel) 2022; 14:cancers14235893. [PMID: 36497374 PMCID: PMC9741368 DOI: 10.3390/cancers14235893] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2022] [Revised: 11/25/2022] [Accepted: 11/27/2022] [Indexed: 12/03/2022] Open
Abstract
This research addresses the problem of interobserver variability (IOV), in which different oncologists manually delineate varying primary gross tumor volume (pGTV) contours, adding risk to targeted radiation treatments. Thus, a method of IOV reduction is urgently needed. Hypothesizing that the radiation oncologist’s IOV may shrink with the aid of IOV maps, we propose the IOV prediction network (IOV-Net), a deep-learning model that uses the fuzzy membership function to produce high-quality maps based on computed tomography (CT) images. To test the prediction accuracy, a ground-truth pGTV IOV map was created using the manual contour delineations of radiation therapy structures provided by five expert oncologists. Then, we tasked IOV-Net with producing a map of its own. The mean squared error (prediction vs. ground truth) and its standard deviation were 0.0038 and 0.0005, respectively. To test the clinical feasibility of our method, CT images were divided into two groups, and oncologists from our institution created manual contours with and without IOV map guidance. The Dice similarity coefficient and Jaccard index increased by approximately 6% and 7%, respectively, and the Hausdorff distance decreased by 2.5 mm, indicating a statistically significant IOV reduction (p < 0.05). Hence, IOV-Net and its resultant IOV maps have the potential to improve radiation therapy efficacy worldwide.
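The three agreement metrics reported above are mechanically related and straightforward to compute. The sketch below (illustrative only, not the IOV-Net code; names are our own) shows the Dice-Jaccard identity J = D / (2 - D) and a brute-force symmetric Hausdorff distance over contour point sets:

```python
import numpy as np

def jaccard_from_dice(d: float) -> float:
    """Jaccard index from the Dice coefficient: J = D / (2 - D)."""
    return d / (2.0 - d)

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two (N, 2) contour point sets.

    Brute force O(N*M) pairwise distances; fine for small contours.
    """
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

# A Dice gain implies a larger relative Jaccard gain, consistent with the
# ~6% Dice vs ~7% Jaccard improvements reported above.
j_lo, j_hi = jaccard_from_dice(0.80), jaccard_from_dice(0.86)

# Hausdorff: the largest nearest-neighbor gap between the two point sets
contour_a = np.array([[0.0, 0.0], [1.0, 0.0]])
contour_b = np.array([[0.0, 0.0], [4.0, 4.0]])
hd = hausdorff(contour_a, contour_b)  # point (4, 4) is 5.0 from its nearest neighbor (1, 0)
```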
24
Xia X, Wang J, Liang S, Ye F, Tian MM, Hu W, Xu L. An attention base U-net for parotid tumor autosegmentation. Front Oncol 2022; 12:1028382. [PMID: 36505865 PMCID: PMC9730401 DOI: 10.3389/fonc.2022.1028382] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Accepted: 10/26/2022] [Indexed: 11/25/2022] Open
Abstract
A parotid neoplasm is an uncommon condition that accounts for less than 3% of all head and neck cancers and less than 0.3% of all new cancers diagnosed annually. Due to their nonspecific imaging features and heterogeneous nature, accurate preoperative diagnosis remains a challenge. Automatic parotid tumor segmentation may help physicians evaluate these tumors. Two hundred eighty-five patients diagnosed with benign or malignant parotid tumors were enrolled in this study. Parotid and tumor tissues were segmented by 3 radiologists on T1-weighted (T1w), T2-weighted (T2w), and T1-weighted contrast-enhanced (T1wC) MR images. These images were randomly divided into two datasets: a training dataset (90%) and a validation dataset (10%). A 10-fold cross-validation was performed to assess performance. An attention-based U-Net for parotid tumor autosegmentation was trained on the T1w, T2w, and T1wC MR images. The results were evaluated on a separate dataset, and the mean Dice similarity coefficient (DICE) for both parotids was 0.88. The mean DICE for left and right tumors was 0.85 and 0.86, respectively. These results indicate that the performance of this model corresponds with the radiologists' manual segmentation. In conclusion, an attention-based U-Net for parotid tumor autosegmentation may assist physicians in evaluating parotid gland tumors.
Affiliation(s)
- Xianwu Xia
- The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Department of Oncology Intervention, The Affiliated Municipal Hospital of Taizhou University, Taizhou, China
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Sheng Liang
- Department of Oncology Intervention, The Affiliated Municipal Hospital of Taizhou University, Taizhou, China
- Fangfang Ye
- Department of Oncology Intervention, The Affiliated Municipal Hospital of Taizhou University, Taizhou, China
- Min-Ming Tian
- Department of Oncology Intervention, Jiangxi University of Traditional Chinese Medicine, Nanchang, Jiangxi, China
- Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Leiming Xu
- The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
25
Hazarika RA, Maji AK, Syiem R, Sur SN, Kandar D. Hippocampus Segmentation Using U-Net Convolutional Network from Brain Magnetic Resonance Imaging (MRI). J Digit Imaging 2022; 35:893-909. [PMID: 35304675 PMCID: PMC9485390 DOI: 10.1007/s10278-022-00613-y] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Revised: 01/04/2022] [Accepted: 01/14/2022] [Indexed: 12/21/2022] Open
Abstract
The hippocampus is part of the limbic system in the human brain and plays an important role in forming memories and supporting intellectual abilities. In most neurological disorders related to dementia, such as Alzheimer's disease, the hippocampus is one of the earliest affected regions. Because there are no effective dementia drugs, an ambient assisted living approach may help to prevent or slow the progression of dementia. By segmenting and analyzing the size and shape of the hippocampus, it may be possible to classify the early stages of dementia. Because of its complex structure, traditional image segmentation techniques cannot segment the hippocampus accurately. Machine learning (ML) is a well-known tool in medical image processing that can predict and deliver outcomes accurately by learning from its previous results. The convolutional neural network (CNN) is one of the most popular ML algorithms. In this work, a U-Net convolutional network approach is used for hippocampus segmentation from 2D brain images. The original U-Net architecture segments the hippocampus with an average performance rate of 93.6%, which outperforms all other discussed state-of-the-art methods. Using a filter size of [Formula: see text], the original U-Net architecture performs a sequence of convolutional processes. We tweaked the architecture further to extract more relevant features by replacing all [Formula: see text] kernels with three alternative kernels of sizes [Formula: see text], [Formula: see text], and [Formula: see text]. The modified architecture achieved an average performance rate of 96.5%, convincingly outperforming the original U-Net model.
Affiliation(s)
- Ruhul Amin Hazarika
- Department of Information Technology, North Eastern Hill University, Shillong, Meghalaya 793022, India
- Arnab Kumar Maji
- Department of Information Technology, North Eastern Hill University, Shillong, Meghalaya 793022, India
- Raplang Syiem
- Department of Information Technology, North Eastern Hill University, Shillong, Meghalaya 793022, India
- Samarendra Nath Sur
- Department of Electronics and Communication Engineering, Sikkim Manipal Institute of Technology, Sikkim Manipal University, Majitar, Sikkim 737136, India
- Debdatta Kandar
- Department of Information Technology, North Eastern Hill University, Shillong, Meghalaya 793022, India
26
A Survey on Deep Learning for Precision Oncology. Diagnostics (Basel) 2022; 12:diagnostics12061489. [PMID: 35741298 PMCID: PMC9222056 DOI: 10.3390/diagnostics12061489] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2022] [Revised: 06/14/2022] [Accepted: 06/14/2022] [Indexed: 12/27/2022] Open
Abstract
Precision oncology, which ensures optimized cancer treatment tailored to the unique biology of a patient’s disease, has rapidly developed and is of great clinical importance. Deep learning has become the main method for precision oncology. This paper summarizes recent deep-learning approaches relevant to precision oncology, reviewing over 150 articles from the last six years. First, we survey the deep-learning approaches categorized by precision oncology task, including estimation of dose distribution for treatment planning, survival analysis and risk estimation after treatment, prediction of treatment response, and patient selection for treatment planning. Second, we provide an overview of the studies per anatomical area, including the brain, bladder, breast, bone, cervix, esophagus, gastric, head and neck, kidneys, liver, lung, pancreas, pelvis, prostate, and rectum. Finally, we highlight the challenges and discuss potential solutions for future research directions.
27
Crouzen JA, Petoukhova AL, Wiggenraad RGJ, Hutschemaekers S, Gadellaa-van Hooijdonk CGM, van der Voort van Zyp NCMG, Mast ME, Zindler JD. Development and evaluation of an automated EPTN-consensus based organ at risk atlas in the brain on MRI. Radiother Oncol 2022; 173:262-268. [PMID: 35714807 DOI: 10.1016/j.radonc.2022.06.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Revised: 04/29/2022] [Accepted: 06/08/2022] [Indexed: 11/19/2022]
Abstract
BACKGROUND AND PURPOSE During radiotherapy treatment planning, avoidance of organs at risk (OARs) is important. An international consensus-based delineation guideline covering 34 OARs in the brain was recently published. We developed an MR-based OAR autosegmentation atlas and evaluated its performance compared to manual delineation. MATERIALS AND METHODS Anonymized cerebral T1-weighted MR scans (voxel size 0.9×0.9×0.9 mm³) were available. OARs were manually delineated according to the international consensus. Fifty MR scans were used to develop the autosegmentation atlas in a commercially available treatment planning system (RayStation®). The performance of this atlas was tested on another 40 MR scans by automatically delineating the 34 OARs defined by the 2018 EPTN consensus. Spatial overlap between manual and automated delineations was determined by calculating the Dice similarity coefficient (DSC). Two radiation oncologists rated the quality of each automatically delineated OAR. The time needed to delineate all OARs manually or to adjust automatically delineated OARs was determined. RESULTS DSC was ≥0.75 in 31 (91%) out of 34 automated OAR delineations. Radiation oncologists rated 29 (85%) out of 34 OAR delineations as excellent or good, 4 as fair (12%), and 1 as poor (3%). Interobserver agreement between the radiation oncologists ranged from 77% to 100% per OAR. The time to manually delineate all OARs was 88.5 minutes, while the time needed to adjust automatically delineated OARs was 15.8 minutes. CONCLUSION Autosegmentation of OARs enables high-quality contouring within a limited time. Accurate OAR delineation helps to define OAR constraints to mitigate serious complications and supports the development of NTCP models.
Affiliation(s)
- Jeroen A Crouzen
- Haaglanden Medical Center, Department of Radiotherapy, BA Leidschendam, The Netherlands
- Anna L Petoukhova
- Haaglanden Medical Center, Department of Medical Physics, BA Leidschendam, The Netherlands
- Ruud G J Wiggenraad
- Haaglanden Medical Center, Department of Radiotherapy, BA Leidschendam, The Netherlands
- Stefan Hutschemaekers
- Haaglanden Medical Center, Department of Radiotherapy, BA Leidschendam, The Netherlands
- Mirjam E Mast
- Haaglanden Medical Center, Department of Radiotherapy, BA Leidschendam, The Netherlands
- Jaap D Zindler
- Haaglanden Medical Center, Department of Radiotherapy, BA Leidschendam, The Netherlands
28
Deng Y, Li C, Lv X, Xia W, Shen L, Jing B, Li B, Guo X, Sun Y, Xie C, Ke L. The contrast-enhanced MRI can be substituted by unenhanced MRI in identifying and automatically segmenting primary nasopharyngeal carcinoma with the aid of deep learning models: An exploratory study in large-scale population of endemic area. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 217:106702. [PMID: 35228147 DOI: 10.1016/j.cmpb.2022.106702] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Revised: 01/25/2022] [Accepted: 02/13/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVES Administration of contrast is not desirable for all cases in the clinical setting, and no consensus on sequence selection for deep learning model development has been achieved. We therefore aimed to explore whether contrast-enhanced magnetic resonance imaging (ceMRI) can be substituted by unenhanced sequences in the identification and segmentation of nasopharyngeal carcinoma (NPC) with the aid of deep learning models in a large-scale cohort. METHODS A total of 4478 eligible individuals were randomly split into training, validation, and test sets, and self-constrained 3D DenseNet and V-Net models were developed using axial T1-weighted imaging (T1WI), T2WI, or enhanced T1WI (T1WIC) images separately. The differential diagnostic performance between NPC and benign hyperplasia was compared among models using the chi-square test. Segmentation evaluation metrics, including the Dice similarity coefficient (DSC) and average surface distance (ASD), were compared using the paired Student's t-test between T1WIC and the T1WI or T2WI models, or M_T1/T2, a merged output of the malignant region derived from the T1WI and T2WI models. RESULTS All models exhibited similarly satisfactory diagnostic performance in discriminating NPC from benign hyperplasia, all attaining overall accuracy over 99.00% across all T stages of NPC. The T1WIC model exhibited average DSC and ASD similar to those of M_T1/T2 (DSC, 0.768±0.070 vs 0.764±0.070; ASD, 1.573±0.954 mm vs 1.626±0.975 mm, all p > 0.0167) in primary NPC using DenseNet, but yielded a significantly higher DSC and lower ASD than either the T1WI model or the T2WI model (DSC, 0.759±0.065 or 0.755±0.071; ASD, 1.661±0.898 mm or 1.722±1.133 mm, respectively, all p < 0.01) in the entire test set of the NPC cohort. Moreover, the differences in average DSC and ASD between the T1WIC model and M_T1/T2 were not statistically significant in both.
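The merged output M_T1/T2 described above combines the malignant regions predicted by the two unenhanced-sequence models. The abstract does not specify the exact fusion rule, so the sketch below assumes the simplest one, a voxel-wise union of the two binary predictions (names and the fusion rule itself are our own illustration, not the authors' code):

```python
import numpy as np

def merge_predictions(mask_t1: np.ndarray, mask_t2: np.ndarray) -> np.ndarray:
    """Voxel-wise union of two binary tumor masks (assumed fusion rule)."""
    return np.logical_or(mask_t1.astype(bool), mask_t2.astype(bool))

# Toy 2x2 example: each model finds a different voxel; the merge keeps both
t1 = np.array([[1, 0], [0, 0]], dtype=bool)  # hypothetical T1WI model output
t2 = np.array([[0, 1], [0, 0]], dtype=bool)  # hypothetical T2WI model output
merged = merge_predictions(t1, t2)  # foreground wherever either model predicts tumor
```

A union maximizes sensitivity at the cost of specificity; an intersection or probability-averaging rule would trade off in the other direction.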
Affiliation(s)
- Yishu Deng
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Information, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Chaofeng Li
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Information, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China; Precision Medicine Center, Sun Yat-Sen University, Guangzhou 510060, China
- Xing Lv
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Weixiong Xia
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Lujun Shen
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Minimally Invasive Therapy, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Bingzhong Jing
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Information, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Bin Li
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Information, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Xiang Guo
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Ying Sun
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Chuanmiao Xie
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Liangru Ke
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
29
Knuth F, Adde IA, Huynh BN, Groendahl AR, Winter RM, Negård A, Holmedal SH, Meltzer S, Ree AH, Flatmark K, Dueland S, Hole KH, Seierstad T, Redalen KR, Futsaether CM. MRI-based automatic segmentation of rectal cancer using 2D U-Net on two independent cohorts. Acta Oncol 2022; 61:255-263. [PMID: 34918621 DOI: 10.1080/0284186x.2021.2013530] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
BACKGROUND Tumor delineation is time- and labor-intensive and prone to inter- and intraobserver variation. Magnetic resonance imaging (MRI) provides good soft-tissue contrast, and functional MRI captures tissue properties that may be valuable for tumor delineation. We explored MRI-based automatic segmentation of rectal cancer using a deep learning (DL) approach. We first investigated potential improvements from including both anatomical T2-weighted (T2w) MRI and diffusion-weighted MR images (DWI). Second, we investigated generalizability by including a second, independent cohort. MATERIAL AND METHODS Two cohorts of rectal cancer patients (C1 and C2) from different hospitals, with 109 and 83 patients respectively, underwent 1.5 T MRI at baseline. T2w images were acquired for both cohorts and DWI (b-value of 500 s/mm2) for patients in C1. Tumors were manually delineated by three radiologists (two in C1, one in C2). A 2D U-Net was trained on T2w and on T2w + DWI. Optimal parameters for image pre-processing and training were identified on C1 using five-fold cross-validation, with the patient Dice similarity coefficient (DSCp) as the performance measure. The optimized models were evaluated on a C1 hold-out test set, and generalizability was investigated using C2. RESULTS For cohort C1, the T2w model achieved a median DSCp of 0.77 on the test set. Inclusion of DWI did not improve performance further (DSCp 0.76). The T2w-based model trained on C1 and applied to C2 achieved a DSCp of 0.59. CONCLUSION T2w MRI-based DL models demonstrated high performance for automatic tumor segmentation, on the same level as published data on interobserver variation. DWI did not improve results further. Applying DL models to unseen cohorts requires caution, as the same performance cannot be expected.
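The patient-wise Dice similarity coefficient used as the performance measure above can be sketched in a few lines; this is an illustrative computation, not the study's code, and the empty-mask convention is an assumption:

```python
# Illustrative sketch of the Dice similarity coefficient (DSC) used to score
# automatic delineations against a manual ground truth. Mask shapes and the
# convention for two empty masks are assumptions for illustration.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2D masks: 4-pixel truth, 4-pixel prediction, 2 pixels overlap
truth = np.zeros((4, 4), int); truth[1:3, 1:3] = 1
pred = np.zeros((4, 4), int); pred[2:4, 1:3] = 1
print(round(dice_coefficient(pred, truth), 2))  # -> 0.5
```

A DSC of 1.0 indicates perfect overlap and 0.0 no overlap; the study's DSCp aggregates this per patient over the tumor volume.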
Affiliation(s)
- Franziska Knuth
- Department of Physics, Norwegian University of Science and Technology, Trondheim, Norway
- Ingvild Askim Adde
- Department of Physics, Norwegian University of Science and Technology, Trondheim, Norway
- Bao Ngoc Huynh
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- René Mario Winter
- Department of Physics, Norwegian University of Science and Technology, Trondheim, Norway
- Anne Negård
- Department of Radiology, Akershus University Hospital, Lørenskog, Norway
- Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Sebastian Meltzer
- Department of Oncology, Akershus University Hospital, Lørenskog, Norway
- Anne Hansen Ree
- Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Department of Oncology, Akershus University Hospital, Lørenskog, Norway
- Kjersti Flatmark
- Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Department of Gastroenterological Surgery, Oslo University Hospital, Oslo, Norway
- Svein Dueland
- Department of Oncology, Oslo University Hospital, Oslo, Norway
- Knut Håkon Hole
- Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Therese Seierstad
- Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Kathrine Røe Redalen
- Department of Physics, Norwegian University of Science and Technology, Trondheim, Norway

30
Groendahl AR, Moe YM, Kaushal CK, Huynh BN, Rusten E, Tomic O, Hernes E, Hanekamp B, Undseth C, Guren MG, Malinen E, Futsaether CM. Deep learning-based automatic delineation of anal cancer gross tumour volume: a multimodality comparison of CT, PET and MRI. Acta Oncol 2022; 61:89-96. [PMID: 34783610] [DOI: 10.1080/0284186x.2021.1994645]
Abstract
BACKGROUND Accurate target volume delineation is a prerequisite for high-precision radiotherapy. However, manual delineation is resource-demanding and prone to interobserver variation. An automatic delineation approach could potentially save time and increase delineation consistency. In this study, the applicability of deep learning for fully automatic delineation of the gross tumour volume (GTV) in patients with anal squamous cell carcinoma (ASCC) was evaluated for the first time. An extensive comparison was conducted of the effects that single-modality and multimodality combinations of computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) have on automatic delineation quality. MATERIAL AND METHODS 18F-fluorodeoxyglucose PET/CT and contrast-enhanced CT (ceCT) images were collected for 86 patients with ASCC. A subset of 36 patients also underwent a study-specific 3T MRI examination including T2- and diffusion-weighted imaging. The resulting two datasets were analysed separately. A two-dimensional U-Net convolutional neural network (CNN) was trained to delineate the GTV in axial image slices based on single- or multimodality image input. Manual GTV delineations constituted the ground truth for CNN model training and evaluation. Models were evaluated using the Dice similarity coefficient (Dice) and surface distance metrics computed from five-fold cross-validation. RESULTS CNN-generated automatic delineations demonstrated good agreement with the ground truth, resulting in mean Dice scores of 0.65-0.76 and 0.74-0.83 for the 86- and 36-patient datasets, respectively. For both datasets, the highest mean Dice scores were obtained using a multimodal combination of PET and ceCT (0.76-0.83). However, models based on single-modality ceCT performed comparably well (0.74-0.81). T2w-only models performed acceptably but were somewhat inferior to the PET/ceCT- and ceCT-based models. CONCLUSION CNNs provided high-quality automatic GTV delineations for both single- and multimodality image input, indicating that deep learning may prove a versatile tool for target volume delineation in future patients with ASCC.
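The single- versus multimodality comparison above comes down to what is stacked in the network's input channels: co-registered slices from each modality become one channel each. A minimal sketch of such channel stacking with per-modality normalisation (the array sizes, z-score choice, and function name are illustrative assumptions, not the study's pipeline):

```python
# Sketch: building single- or multimodality CNN inputs by stacking
# co-registered 2D slices as channels, normalising each modality
# independently. All names and shapes are assumptions for illustration.
import numpy as np

def make_input(*modalities: np.ndarray) -> np.ndarray:
    """Stack co-registered (H, W) slices into an (H, W, C) network input,
    z-score normalising each modality channel independently."""
    chans = []
    for m in modalities:
        m = m.astype(np.float64)
        chans.append((m - m.mean()) / (m.std() + 1e-8))
    return np.stack(chans, axis=-1)

pet = np.random.rand(64, 64)   # stand-ins for PET and contrast-enhanced CT
cect = np.random.rand(64, 64)
x = make_input(pet, cect)      # multimodality input: one channel per modality
print(x.shape)                 # (64, 64, 2)
```

A single-modality model would simply receive `make_input(cect)` with one channel; the network architecture is otherwise unchanged apart from the input channel count.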
Affiliation(s)
- Yngve Mardal Moe
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Bao Ngoc Huynh
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Espen Rusten
- Department of Medical Physics, Oslo University Hospital, Oslo, Norway
- Oliver Tomic
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Eivor Hernes
- Department of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Bettina Hanekamp
- Department of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Marianne Grønlie Guren
- Department of Oncology, Oslo University Hospital, Oslo, Norway
- Division of Cancer Medicine, Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Eirik Malinen
- Department of Medical Physics, Oslo University Hospital, Oslo, Norway
- Department of Physics, University of Oslo, Oslo, Norway

31
Jiang Y, Xu S, Fan H, Qian J, Luo W, Zhen S, Tao Y, Sun J, Lin H. ALA-Net: Adaptive Lesion-Aware Attention Network for 3D Colorectal Tumor Segmentation. IEEE Trans Med Imaging 2021; 40:3627-3640. [PMID: 34197319] [DOI: 10.1109/tmi.2021.3093982]
Abstract
Accurate and reliable segmentation of colorectal tumors and surrounding colorectal tissues on 3D magnetic resonance images is of critical importance in preoperative prediction, staging, and radiotherapy. Previous works simply combine multilevel features without aggregating representative semantic information and without compensating for the loss of spatial information caused by down-sampling. They are therefore vulnerable to noise from complex backgrounds and suffer from misclassification and target-incompleteness failures. In this paper, we address these limitations with a novel adaptive lesion-aware attention network (ALA-Net), which explicitly integrates useful contextual information with spatial details and captures richer feature dependencies based on 3D attention mechanisms. The model comprises two parallel encoding paths. One of these is designed to explore global contextual features and enlarge the receptive field using a recurrent strategy. The other captures sharper object boundaries and the details of small objects that are lost in repeated down-sampling layers. Our lesion-aware attention module adaptively captures long-range semantic dependencies and highlights the most discriminative features, improving semantic consistency and completeness. Furthermore, we introduce a prediction aggregation module to combine multiscale feature maps and to further filter out irrelevant information for precise voxel-wise prediction. Experimental results show that ALA-Net outperforms state-of-the-art methods and inherently generalizes well to other 3D medical image segmentation tasks, providing multiple benefits in terms of target completeness, reduction of false positives, and accurate detection of ambiguous lesion regions.
32
Huang YJ, Dou Q, Wang ZX, Liu LZ, Jin Y, Li CF, Wang L, Chen H, Xu RH. 3-D RoI-Aware U-Net for Accurate and Efficient Colorectal Tumor Segmentation. IEEE Trans Cybern 2021; 51:5397-5408. [PMID: 32248143] [DOI: 10.1109/tcyb.2020.2980145]
Abstract
Segmentation of colorectal cancerous regions from 3-D magnetic resonance (MR) images is a crucial procedure for radiotherapy. Automatic delineation from 3-D whole volumes is in urgent demand yet very challenging. Drawbacks of existing deep-learning-based methods for this task are two-fold: 1) the extensive graphics processing unit (GPU) memory footprint of 3-D tensors limits the trainable volume size, shrinks the effective receptive field, and therefore degrades speed and segmentation performance; and 2) in-region segmentation methods supported by region-of-interest (RoI) detection are either blind to global contexts, compromise detail richness, or are too expensive for 3-D tasks. To tackle these drawbacks, we propose a novel encoder-decoder-based framework for 3-D whole-volume segmentation, referred to as 3-D RoI-aware U-Net (3-D RU-Net). 3-D RU-Net fully utilizes global contexts covering large effective receptive fields. Specifically, the proposed model consists of a global image encoder for global-understanding-based RoI localization, and a local region decoder that operates on pyramid-shaped in-region global features, which is GPU memory efficient and thereby enables training and prediction with large 3-D whole volumes. To facilitate the global-to-local learning procedure and enhance contour detail richness, we designed a Dice-based multitask hybrid loss function. The efficiency of the proposed framework enables an extensive model ensemble for further performance gain at acceptable extra computational cost. Over a dataset of 64 T2-weighted MR images, the experimental results of four-fold cross-validation show that our method achieved a Dice similarity coefficient (DSC) of 75.5% in 0.61 s per volume on a GPU, significantly outperforming competing methods in terms of accuracy and efficiency. The code is publicly available.
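The "Dice-based" component of hybrid segmentation losses like the one described above is commonly implemented as a soft (differentiable) Dice term on predicted probabilities. The sketch below shows only that generic term, in NumPy for clarity; it does not reproduce the paper's full multitask loss, and the epsilon smoothing is a common convention, not the authors' exact formulation:

```python
# Hedged sketch of a soft Dice loss term: 1 minus the soft Dice overlap
# between predicted probabilities and a binary ground-truth mask.
# The eps smoothing constant is an assumed convention.
import numpy as np

def soft_dice_loss(prob: np.ndarray, truth: np.ndarray, eps: float = 1e-6) -> float:
    """1 - (2*sum(p*t) + eps) / (sum(p) + sum(t) + eps)."""
    inter = (prob * truth).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + truth.sum() + eps)

truth = np.array([[0, 1], [1, 0]], float)
perfect = truth.copy()                 # exact prediction
uniform = np.full_like(truth, 0.5)     # uninformative prediction
print(round(soft_dice_loss(perfect, truth), 3))  # -> 0.0
print(round(soft_dice_loss(uniform, truth), 3))  # -> 0.5
```

Because the term is differentiable in the probabilities, it can be minimised by gradient descent alongside other task losses, which is what makes Dice-based hybrid losses practical for training segmentation networks.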
33
Chen M, Wu S, Zhao W, Zhou Y, Zhou Y, Wang G. Application of deep learning to auto-delineation of target volumes and organs at risk in radiotherapy. Cancer Radiother 2021; 26:494-501. [PMID: 34711488] [DOI: 10.1016/j.canrad.2021.08.020]
Abstract
Technological advances have heralded the arrival of precision radiotherapy (RT), increasing the therapeutic ratio and decreasing side effects from treatment. Contouring of target volumes (TV) and organs at risk (OARs) in RT is a complicated process. In recent years, automatic contouring of TV and OARs has developed rapidly due to advances in deep learning (DL). This technology has the potential to save time and to reduce intra- and inter-observer variability. In this paper, the authors provide an overview of RT, introduce the concept of DL, summarize the data characteristics of the included literature, summarize possible future challenges for DL, and discuss possible research directions.
Affiliation(s)
- M Chen
- Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- S Wu
- Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- W Zhao
- Bengbu Medical College, Bengbu, Anhui 233030, China
- Y Zhou
- Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- Y Zhou
- Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- G Wang
- Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China

34
Kalantar R, Lin G, Winfield JM, Messiou C, Lalondrelle S, Blackledge MD, Koh DM. Automatic Segmentation of Pelvic Cancers Using Deep Learning: State-of-the-Art Approaches and Challenges. Diagnostics (Basel) 2021; 11:1964. [PMID: 34829310] [PMCID: PMC8625809] [DOI: 10.3390/diagnostics11111964]
Abstract
The recent rise of deep learning (DL) and its promising capabilities in capturing non-explicit detail from large datasets have attracted substantial research attention in the field of medical image processing. DL provides grounds for technological development of computer-aided diagnosis and segmentation in radiology and radiation oncology. Amongst the anatomical locations where recent auto-segmentation algorithms have been employed, the pelvis remains one of the most challenging due to large intra- and inter-patient soft-tissue variability. This review provides a comprehensive, non-systematic and clinically oriented overview of 74 DL-based segmentation studies, published between January 2016 and December 2020, for bladder, prostate, cervical and rectal cancers on computed tomography (CT) and magnetic resonance imaging (MRI), highlighting the key findings, challenges and limitations.
Affiliation(s)
- Reza Kalantar
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Gigin Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Chang Gung University, 5 Fuhsing St., Guishan, Taoyuan 333, Taiwan
- Jessica M. Winfield
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Christina Messiou
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Susan Lalondrelle
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Matthew D. Blackledge
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Dow-Mu Koh
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK

35
Chang Y, Wang Z, Peng Z, Zhou J, Pi Y, Xu XG, Pei X. Clinical application and improvement of a CNN-based autosegmentation model for clinical target volumes in cervical cancer radiotherapy. J Appl Clin Med Phys 2021; 22:115-125. [PMID: 34643320] [PMCID: PMC8598149] [DOI: 10.1002/acm2.13440]
Abstract
OBJECTIVE Clinical target volume (CTV) autosegmentation for cervical cancer is desirable for radiation therapy, but data heterogeneity and interobserver variability (IOV) limit the clinical adaptability of such methods. An adaptive method is proposed to improve the adaptability of CNN-based autosegmentation of CTV contours in cervical cancer. METHODS This study included 400 cervical cancer treatment planning cases with CTV delineated by radiation oncologists from three hospitals. The datasets were divided into five subdatasets (80 cases each). The cases in datasets 1, 2, and 3 were delineated by physicians A, B, and C, respectively. The cases in datasets 4 and 5 were delineated by multiple physicians. Dataset 1 was divided into training (50 cases), validation (10 cases), and testing (20 cases) cohorts, which were used to construct the pretrained model. Datasets 2-5 were regarded as host datasets to evaluate the accuracy of the pretrained model. In the adaptive process, the pretrained model was fine-tuned, measuring improvements while gradually adding more training cases selected from the host datasets. The accuracy of the autosegmentation model on each host dataset was evaluated using the corresponding test cases. The Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD_95) were used to evaluate accuracy. RESULTS Before and after adaptive improvement, the average DSC values on the host datasets were 0.818 versus 0.882, 0.763 versus 0.810, 0.727 versus 0.772, and 0.679 versus 0.789: improvements of 7.82%, 6.16%, 6.19%, and 16.05%, respectively. The average HD_95 values were 11.143 mm versus 6.853 mm, 22.402 mm versus 14.076 mm, 28.145 mm versus 16.437 mm, and 33.034 mm versus 16.441 mm: improvements of 37.94%, 37.17%, 41.60%, and 50.23%, respectively. CONCLUSION The proposed method improved the adaptability of the CNN-based autosegmentation model when applied to host datasets.
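The HD_95 metric reported above is the 95th percentile of boundary point distances rather than the maximum, which makes it robust to single outlier points. A minimal sketch of a symmetric HD_95 on 2D point sets (the point-set representation and symmetrisation by pooling both directed distances are assumptions; production tools compute this on 3D surfaces):

```python
# Illustrative 95th-percentile Hausdorff distance (HD_95) between two
# contours given as point sets of shape (N, 2). Units (e.g. mm) follow
# the coordinate units; the pooling convention is an assumption.
import numpy as np

def hd95(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric 95th-percentile Hausdorff distance between point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    forward = d.min(axis=1)   # each point of a to its nearest point of b
    backward = d.min(axis=0)  # each point of b to its nearest point of a
    return float(np.percentile(np.concatenate([forward, backward]), 95))

a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = a + [0.0, 1.0]            # the same contour shifted by 1 unit
print(hd95(a, b))             # -> 1.0
```

Lower HD_95 means better boundary agreement, complementing DSC, which measures volumetric overlap rather than worst-case boundary error.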
Affiliation(s)
- Yankui Chang
- Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, China
- Zhi Wang
- Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, China; Radiation Oncology Department, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Zhao Peng
- Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, China
- Jieping Zhou
- Radiation Oncology Department, First Affiliated Hospital of University of Science and Technology of China, Hefei, China
- Yifei Pi
- Radiation Oncology Department, First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- X George Xu
- Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, China; Radiation Oncology Department, First Affiliated Hospital of University of Science and Technology of China, Hefei, China
- Xi Pei
- Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, China; Anhui Wisdom Technology Co., Ltd., Hefei, Anhui, China

36
Fang Y, Wang J, Ou X, Ying H, Hu C, Zhang Z, Hu W. The impact of training sample size on deep learning-based organ auto-segmentation for head-and-neck patients. Phys Med Biol 2021; 66. [PMID: 34450599] [DOI: 10.1088/1361-6560/ac2206]
Abstract
To investigate the impact of training sample size on the performance of deep learning-based organ auto-segmentation for head-and-neck cancer patients, a total of 1160 patients with head-and-neck cancer who received radiotherapy were enrolled in this study. Patient planning CT images and region of interest (ROI) delineations, including the brainstem, spinal cord, eyes, lenses, optic nerves, temporal lobes, parotids, larynx and body, were collected. An evaluation dataset of 200 patients was randomly selected, and the Dice similarity index was used to evaluate model performance. Eleven training datasets with different sample sizes were randomly selected from the remaining 960 patients to train auto-segmentation models. All models used the same data augmentation methods, network structures and training hyperparameters. A model estimating performance as a function of training sample size, based on the inverse power law function, was established. Different performance change patterns were found for different organs. Six organs performed best with 800 training samples, and the others achieved their best performance with 600 or 400 samples. The benefit of increasing the size of the training dataset gradually decreased. Compared to their best performance, the optic nerves and lenses reached 95% of their best effect at a sample size of 200, while the other organs reached 95% at 40. Regarding the fit of the inverse power law function, the fitted root mean square errors of all ROIs were less than 0.03 (left eye: 0.024, others: <0.01), and the R-squared of all ROIs except for the body was greater than 0.5. The sample size has a significant impact on the performance of deep learning-based auto-segmentation. The relationship between sample size and performance depends on the inherent characteristics of the organ. In some cases, relatively small samples can achieve satisfactory performance.
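The performance-estimation step above fits an inverse power law to learning-curve points (performance versus training sample size). A sketch on synthetic data follows; the functional form performance(n) ≈ a - b·n^(-c), the parameter values, and all numbers are assumptions for illustration, not the study's fitted model:

```python
# Sketch: fitting an inverse power law learning curve to synthetic
# (sample size, Dice) points with scipy.optimize.curve_fit.
# The form a - b * n**(-c) and every number here are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def inv_power(n, a, b, c):
    return a - b * np.power(n, -c)

sizes = np.array([25, 50, 100, 200, 400, 600, 800], float)
true = inv_power(sizes, 0.85, 0.5, 0.6)              # synthetic "Dice" curve
rng = np.random.default_rng(0)
obs = true + rng.normal(0, 0.002, sizes.size)        # add small noise

params, _ = curve_fit(inv_power, sizes, obs, p0=[0.8, 0.5, 0.5])
a_hat, b_hat, c_hat = params
print(a_hat)  # estimated plateau performance approached as n grows
```

The fitted parameter `a_hat` estimates the performance plateau, which is how such a fit lets one extrapolate the diminishing return of adding more training cases.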
Affiliation(s)
- Yingtao Fang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Xiaomin Ou
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Hongmei Ying
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Chaosu Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Zhen Zhang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China

37
Douglass MJJ, Keal JA. DeepWL: Robust EPID based Winston-Lutz analysis using deep learning, synthetic image generation and optical path-tracing. Phys Med 2021; 89:306-316. [PMID: 34492498] [DOI: 10.1016/j.ejmp.2021.08.012]
Abstract
Radiation therapy requires clinical linear accelerators to be mechanically and dosimetrically calibrated to a high standard. One important quality assurance test is the Winston-Lutz test, which localises the radiation isocentre of the linac. In the current work, we demonstrate a novel method of analysing EPID-based Winston-Lutz QA images using a deep learning model trained only on synthetic image data. In addition, we propose a novel method of generating the synthetic WL images and associated 'ground-truth' masks using an optical path-tracing engine to 'fake' mega-voltage EPID images. The model, called DeepWL, was trained on 1500 synthetic WL images using data augmentation techniques for 180 epochs. The model was built using Keras with a TensorFlow backend on an Intel Core i5-6500T CPU and trained in approximately 15 h. DeepWL was shown to produce ball bearing and multi-leaf collimator field segmentations with mean dice coefficients of 0.964 and 0.994, respectively, on previously unseen synthetic testing data. When DeepWL was applied to WL data measured on an EPID, the predicted mean displacements were shown to be statistically similar to those of the Canny edge detection method. However, the DeepWL predictions for the ball bearing locations were shown to correlate better with manual annotations than those of the Canny edge detection algorithm. DeepWL was demonstrated to analyse Winston-Lutz images with an accuracy suitable for routine linac quality assurance, with some statistical evidence that it may outperform Canny edge detection methods in terms of segmentation robustness and the resultant displacement predictions.
Affiliation(s)
- Michael John James Douglass
- School of Physical Sciences, University of Adelaide, Adelaide 5005, South Australia, Australia; Department of Medical Physics, Royal Adelaide Hospital, Adelaide 5000, South Australia, Australia
- James Alan Keal
- School of Physical Sciences, University of Adelaide, Adelaide 5005, South Australia, Australia

38
Robert C, Munoz A, Moreau D, Mazurier J, Sidorski G, Gasnier A, Beldjoudi G, Grégoire V, Deutsch E, Meyer P, Simon L. Clinical implementation of deep-learning based auto-contouring tools-Experience of three French radiotherapy centers. Cancer Radiother 2021; 25:607-616. [PMID: 34389243] [DOI: 10.1016/j.canrad.2021.06.023]
Abstract
Deep-learning (DL)-based auto-contouring solutions have recently been proposed as a convincing alternative for decreasing the workload of target volume and organ-at-risk (OAR) delineation in radiotherapy planning and improving inter-observer consistency. However, there is minimal literature on clinical implementations of such algorithms in routine practice. In this paper, we first present an update on the state of the art of DL-based solutions. We then summarize recent recommendations proposed by the European Society for Radiotherapy and Oncology (ESTRO) to be followed before any clinical implementation of artificial intelligence-based solutions. The last section describes the methodology carried out by three French radiation oncology departments to deploy CE-marked commercial solutions. Based on the information collected, a majority of the OARs proposed by the manufacturers are retained by the centers, validating the usefulness of DL-based models for decreasing clinicians' workload. Target volumes, with the exception of lymph node areas in breast, head and neck and pelvic regions, whole breast, breast wall, prostate and seminal vesicles, are not available in the three commercial solutions at this time. No implemented workflows are currently available to continuously improve the models, but in some solutions the models can be adapted or retrained during the commissioning phase to best fit local practices. In the reported experiences, automatic workflows were implemented to limit human interactions and make the workflow more fluid. The recommendations published by the ESTRO group will be important for guiding physicists in the clinical implementation of patient-specific and regular quality assurance.
Affiliation(s)
- C Robert
- Department of Radiotherapy, Gustave-Roussy, Villejuif, France
- A Munoz
- Department of Radiotherapy, Centre Léon-Bérard, Lyon, France
- D Moreau
- Department of Radiotherapy, Hôpital Européen Georges-Pompidou, Paris, France
- J Mazurier
- Department of Radiotherapy, Clinique Pasteur-Oncorad, Toulouse, France
- G Sidorski
- Department of Radiotherapy, Clinique Pasteur-Oncorad, Toulouse, France
- A Gasnier
- Department of Radiotherapy, Gustave-Roussy, Villejuif, France
- G Beldjoudi
- Department of Radiotherapy, Centre Léon-Bérard, Lyon, France
- V Grégoire
- Department of Radiotherapy, Centre Léon-Bérard, Lyon, France
- E Deutsch
- Department of Radiotherapy, Gustave-Roussy, Villejuif, France
- P Meyer
- Service d'Oncologie Radiothérapie, Institut de Cancérologie Strasbourg Europe (Icans), Strasbourg, France
- L Simon
- Institut Claudius Regaud (ICR), Institut Universitaire du Cancer de Toulouse - Oncopole (IUCT-O), Toulouse, France

39
Zhu HT, Zhang XY, Shi YJ, Li XT, Sun YS. Automatic segmentation of rectal tumor on diffusion-weighted images by deep learning with U-Net. J Appl Clin Med Phys 2021; 22:324-331. [PMID: 34343402] [PMCID: PMC8425941] [DOI: 10.1002/acm2.13381]
Abstract
Purpose Manual delineation of a rectal tumor on a volumetric image is time-consuming and subjective. Deep learning has been used to segment rectal tumors automatically on T2-weighted images, but automatic segmentation on diffusion-weighted imaging (DWI) is challenged by noise, artifacts, and low resolution. In this study, a volumetric U-shaped neural network (U-Net) is proposed to automatically segment rectal tumors on diffusion-weighted images. Methods Three hundred patients with locally advanced rectal cancer were enrolled in this study and divided into a training group, a validation group, and a test group. The region of the rectal tumor was delineated on the diffusion-weighted images by experienced radiologists as the ground truth. A U-Net was designed with a volumetric input of the diffusion-weighted images and an output segmentation of the same size. A semi-automatic segmentation method was used for comparison, manually choosing a threshold of gray level and automatically selecting the largest connected region. The Dice similarity coefficient (DSC) was calculated to evaluate the methods. Results On the test group, the deep learning method (DSC = 0.675 ± 0.144; median 0.702, maximum 0.893, minimum 0.297) showed higher segmentation accuracy than the semi-automatic method (DSC = 0.614 ± 0.225; median 0.685, maximum 0.869, minimum 0.047). A paired t-test showed a significant difference (T = 2.160, p = 0.035) in DSC between the deep learning method and the semi-automatic method in the test group. Conclusion A volumetric U-Net can automatically segment the rectal tumor region on DWI images of locally advanced rectal cancer.
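The semi-automatic baseline described above (a manually chosen grey-level threshold followed by keeping the largest connected region) can be sketched as follows; the threshold value, toy image, and function name are illustrative assumptions, not the study's implementation:

```python
# Sketch of a threshold-then-largest-connected-region baseline:
# binarise the image at a chosen grey level, label connected
# components, and keep only the largest one.
import numpy as np
from scipy import ndimage

def largest_region_above(img: np.ndarray, thresh: float) -> np.ndarray:
    """Binary mask of the largest connected component above `thresh`."""
    binary = img > thresh
    labels, n = ndimage.label(binary)          # label connected components
    if n == 0:
        return np.zeros_like(binary)
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)    # keep the biggest component

img = np.zeros((8, 8))
img[1:3, 1:3] = 1.0   # small bright blob (4 px)
img[4:8, 4:8] = 1.0   # larger bright blob (16 px)
mask = largest_region_above(img, 0.5)
print(int(mask.sum()))  # -> 16: only the larger blob survives
```

Such a baseline has no learned notion of tumor appearance, which helps explain why its worst-case DSC in the study (minimum 0.047) is far below that of the trained U-Net.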
Affiliation(s)
- Hai-Tao Zhu, Xiao-Yan Zhang, Yan-Jie Shi, Xiao-Ting Li, Ying-Shi Sun: Department of Radiology, Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Peking University Cancer Hospital & Institute, Beijing, China
|
40
|
Zhang G, Chen L, Liu A, Pan X, Shu J, Han Y, Huan Y, Zhang J. Comparable Performance of Deep Learning-Based to Manual-Based Tumor Segmentation in KRAS/NRAS/BRAF Mutation Prediction With MR-Based Radiomics in Rectal Cancer. Front Oncol 2021; 11:696706. [PMID: 34395262 PMCID: PMC8358773 DOI: 10.3389/fonc.2021.696706] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Accepted: 07/15/2021] [Indexed: 12/22/2022] Open
Abstract
Radiomic features extracted from segmented tumor regions have shown great power in gene mutation prediction, while deep learning-based (DL-based) segmentation helps to address the inherent limitations of manual segmentation. We therefore investigated whether DL-based segmentation is feasible for predicting KRAS/NRAS/BRAF mutations of rectal cancer using MR-based radiomics. In this study, we proposed DL-based segmentation models with a 3D V-Net architecture. Images (T2WI and DWI) from 108 patients were collected for training, and images from another 94 patients were collected for validation. We evaluated the DL-based segmentation approach against the manual approach by comparing the gene prediction performance of six radiomics-based models on the test set. The performance of the DL-based segmentation was evaluated by Dice coefficients, which were 0.878 ± 0.214 and 0.955 ± 0.055 for T2WI and DWI, respectively. The performance of the radiomics-based model in gene prediction based on the DL-segmented VOI was evaluated by AUCs (0.714 for T2WI, 0.816 for DWI, and 0.887 for T2WI+DWI), which were comparable to those of the corresponding manual-based VOI (0.637 for T2WI, P=0.188; 0.872 for DWI, P=0.181; and 0.906 for T2WI+DWI, P=0.676). The results showed that the 3D V-Net architecture can produce reliable rectal cancer segmentations on T2WI and DWI images. All-relevant radiomics-based models presented similar performance in KRAS/NRAS/BRAF prediction between the two segmentation approaches.
Affiliation(s)
- Guangwen Zhang, Jun Shu, Ye Han, Yi Huan, Jinsong Zhang: Department of Radiology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Lei Chen, Aie Liu, Xianpan Pan: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
|
41
|
Samarasinghe G, Jameson M, Vinod S, Field M, Dowling J, Sowmya A, Holloway L. Deep learning for segmentation in radiation therapy planning: a review. J Med Imaging Radiat Oncol 2021; 65:578-595. [PMID: 34313006 DOI: 10.1111/1754-9485.13286] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2021] [Accepted: 06/29/2021] [Indexed: 12/21/2022]
Abstract
Segmentation of organs and structures, as either targets or organs-at-risk, has a significant influence on the success of radiation therapy. Manual segmentation is a tedious and time-consuming task for clinicians, and inter-observer variability can affect the outcomes of radiation therapy. The recent surge of interest in deep neural networks has added many powerful auto-segmentation methods, most of them variations of convolutional neural networks (CNNs). This paper presents a descriptive review of the literature on deep learning techniques for segmentation in radiation therapy planning. The most common CNN architecture across the four clinical subsites considered was U-Net, with the majority of deep learning segmentation articles focussed on head and neck normal tissue structures. The most common data sets were CT images from an in-house source, along with some public data sets. N-fold cross-validation was commonly employed; however, not all work separated training, test and validation data sets. This area of research is expanding rapidly. To facilitate comparisons of proposed methods and benchmarking, consistent use of appropriate metrics and independent validation should be carefully considered.
Affiliation(s)
- Gihan Samarasinghe: School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia
- Michael Jameson: Genesiscare, Sydney, New South Wales, Australia; St Vincent's Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Shalini Vinod, Matthew Field, Lois Holloway: Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia; Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
- Jason Dowling: Commonwealth Scientific and Industrial Research Organisation, Australian E-Health Research Centre, Herston, Queensland, Australia
- Arcot Sowmya: School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
|
42
|
Shusharina N, Söderberg J, Lidberg D, Niyazi M, Shih HA, Bortfeld T. Accounting for uncertainties in the position of anatomical barriers used to define the clinical target volume. Phys Med Biol 2021; 66. [PMID: 34171846 DOI: 10.1088/1361-6560/ac0ea3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2021] [Accepted: 06/25/2021] [Indexed: 11/11/2022]
Abstract
The definition of the clinical target volume (CTV) is becoming the weakest link in the radiotherapy chain. CTV definition consensus guidelines prescribe a geometric expansion beyond the visible gross tumor volume while avoiding anatomical barriers. In a previous publication we described how to implement these consensus guidelines in a computerized CTV auto-delineation process using deep learning and graph search techniques. In this paper we address the remaining problem of how to deal with uncertainties in the positions of the anatomical barriers. The objective was to develop an algorithm that implements the consensus guidelines while taking barrier uncertainties into account. Our approach is to perform multiple expansions using the fast marching method, with barriers in place or removed at different stages of the expansion. We validate the algorithm in a computational phantom and compare manually generated with automated CTV contours, both taking barrier uncertainties into account.
Affiliation(s)
- Nadya Shusharina, Thomas Bortfeld: Division of Radiation Biophysics, Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, United States of America
- Maximilian Niyazi: Department of Radiation Oncology, University Hospital, LMU Munich, Munich, Germany; German Cancer Consortium (DKTK), Partner Site Munich, Munich, Germany
- Helen A Shih: Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, United States of America
|
43
|
Berbís MA, Aneiros-Fernández J, Mendoza Olivares FJ, Nava E, Luna A. Role of artificial intelligence in multidisciplinary imaging diagnosis of gastrointestinal diseases. World J Gastroenterol 2021; 27:4395-4412. [PMID: 34366612 PMCID: PMC8316909 DOI: 10.3748/wjg.v27.i27.4395] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 04/14/2021] [Accepted: 06/07/2021] [Indexed: 02/06/2023] Open
Abstract
The use of artificial intelligence-based tools is regarded as a promising approach to increase clinical efficiency in diagnostic imaging, improve the interpretability of results, and support decision-making for the detection and prevention of diseases. Radiology, endoscopy and pathology images are suitable for deep-learning analysis, potentially changing the way care is delivered in gastroenterology. The aim of this review is to examine the key aspects of different neural network architectures used for the evaluation of gastrointestinal conditions, by discussing how different models behave in critical tasks, such as lesion detection or characterization (i.e. the distinction between benign and malignant lesions of the esophagus, the stomach and the colon). To this end, we provide an overview on recent achievements and future prospects in deep learning methods applied to the analysis of radiology, endoscopy and histologic whole-slide images of the gastrointestinal tract.
Affiliation(s)
- José Aneiros-Fernández: Department of Pathology, Hospital Universitario Clínico San Cecilio, Granada 18012, Spain
- Enrique Nava: Department of Communications Engineering, University of Málaga, Malaga 29016, Spain
- Antonio Luna: MRI Unit, Department of Radiology, HT Médica, Jaén 23007, Spain
|
44
|
Guo H, Wang J, Xia X, Zhong Y, Peng J, Zhang Z, Hu W. The dosimetric impact of deep learning-based auto-segmentation of organs at risk on nasopharyngeal and rectal cancer. Radiat Oncol 2021; 16:113. [PMID: 34162410 PMCID: PMC8220801 DOI: 10.1186/s13014-021-01837-y] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2021] [Accepted: 06/10/2021] [Indexed: 12/25/2022] Open
Abstract
Purpose: To investigate the dosimetric impact of deep learning-based auto-segmentation of organs at risk (OARs) on nasopharyngeal and rectal cancer. Methods and materials: Twenty patients, including ten nasopharyngeal carcinoma (NPC) patients and ten rectal cancer patients, who received radiotherapy in our department were enrolled in this study. Two deep learning-based auto-segmentation systems, an in-house developed system (FD) and a commercial product (UIH), were used to generate two auto-segmented OAR sets (OAR_FD and OAR_UIH). For each patient, treatment plans following our clinical requirements were generated on each OAR set (Plan_FD and Plan_UIH). Geometric metrics (Hausdorff distance (HD), mean distance to agreement (MDA), the Dice similarity coefficient (DICE), and the Jaccard index) were calculated for geometric evaluation. The dosimetric impact was evaluated by comparing Plan_FD and Plan_UIH to the original clinically approved plans (Plan_Manual) with dose-volume metrics and 3D gamma analysis. Spearman's correlation analysis was performed to investigate the correlation between dosimetric differences and geometric metrics. Results: FD and UIH provided similar geometric performance for the parotids, temporal lobes, lenses, and eyes (DICE, p > 0.05). OAR_FD had better geometric performance in the optic nerves, oral cavity, larynx, and femoral heads (DICE, p < 0.05); OAR_UIH had better geometric performance in the bladder (DICE, p < 0.05). In the dosimetric analysis, both Plan_FD and Plan_UIH showed nonsignificant dosimetric differences compared to Plan_Manual for most PTV and OAR dose-volume metrics; the only significant difference was the maximum dose of the left temporal lobe for Plan_FD vs. Plan_Manual (p = 0.05). Only one significant correlation was found, between the mean dose of the femoral head and its HD index (R = 0.4, p = 0.01); no OAR showed a strong correlation between its dosimetric difference and any of the four geometric metrics. Conclusions: Deep learning-based OAR auto-segmentation for NPC and rectal cancer has a nonsignificant impact on most PTV and OAR dose-volume metrics. Correlations between auto-segmentation geometric metrics and dosimetric differences were not observed for most OARs. Supplementary Information: The online version contains supplementary material available at 10.1186/s13014-021-01837-y.
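The Hausdorff distance named above, and the 95th-percentile variant reported by other studies in this list, can likewise be sketched for two contour point sets (a generic brute-force NumPy illustration, not the evaluation code of the cited study; the square point sets are invented for the example):

```python
import numpy as np

def directed_distances(a, b):
    """For each point in a, distance to its nearest neighbour in b."""
    # Pairwise Euclidean distances via broadcasting: shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1)

def hausdorff(a, b, percentile=100):
    """Symmetric (percentile) Hausdorff distance between point sets a and b.

    percentile=100 gives the classic max-of-min HD; percentile=95 gives
    the outlier-robust 95% HD often reported for contour comparisons.
    """
    d_ab = directed_distances(a, b)
    d_ba = directed_distances(b, a)
    return max(np.percentile(d_ab, percentile), np.percentile(d_ba, percentile))

# Toy example: two unit squares offset by 3 mm along x
sq = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(hausdorff(sq, sq + [3.0, 0.0]))  # 3.0
```

Production tools typically compute this on surface voxels with spatial indexing rather than the O(n·m) pairwise matrix shown here.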
Affiliation(s)
- Hongbo Guo, Jiazhou Wang, Xiang Xia, Yang Zhong, Jiayuan Peng, Zhen Zhang: Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, 200032, China
- Weigang Hu: Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
|
45
|
Wong J, Huang V, Giambattista JA, Teke T, Kolbeck C, Giambattista J, Atrchian S. Training and Validation of Deep Learning-Based Auto-Segmentation Models for Lung Stereotactic Ablative Radiotherapy Using Retrospective Radiotherapy Planning Contours. Front Oncol 2021; 11:626499. [PMID: 34164335 PMCID: PMC8215371 DOI: 10.3389/fonc.2021.626499] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2020] [Accepted: 05/14/2021] [Indexed: 12/22/2022] Open
Abstract
PURPOSE: Deep learning-based auto-segmented contour (DC) models require high-quality data for their development, and previous studies have typically used prospectively produced contours, which can be resource-intensive and time-consuming to obtain. The aim of this study was to investigate the feasibility of using retrospective peer-reviewed radiotherapy planning contours in the training and evaluation of DC models for lung stereotactic ablative radiotherapy (SABR). METHODS: Using commercial deep learning-based auto-segmentation software, DC models for lung SABR organs at risk (OAR) and gross tumor volume (GTV) were trained using a deep convolutional neural network and a median of 105 contours per structure model, obtained from 160 publicly available CT scans and 50 peer-reviewed SABR planning 4D-CT scans from center A. DCs were generated for 50 additional planning CT scans from center A and 50 from center B, and compared with the clinical contours (CC) using the Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD). RESULTS: Comparing DCs to CCs, the mean DSC and 95% HD were 0.93 and 2.85 mm for the aorta, 0.81 and 3.32 mm for the esophagus, 0.95 and 5.09 mm for the heart, 0.98 and 2.99 mm for the bilateral lung, 0.52 and 7.08 mm for the bilateral brachial plexus, 0.82 and 4.23 mm for the proximal bronchial tree, 0.90 and 1.62 mm for the spinal cord, 0.91 and 2.27 mm for the trachea, and 0.71 and 5.23 mm for the GTV. DC-to-CC comparisons for center A and center B were similar for all OAR structures. CONCLUSIONS: The DCs developed with retrospective peer-reviewed treatment contours approximated CCs for the majority of OARs, including on an external dataset. DCs for structures with more variability tended to be less accurate and likely require a larger number of training cases or novel training approaches to improve performance. Developing DC models from existing radiotherapy planning contours appears feasible and warrants further clinical workflow testing.
Affiliation(s)
- Jordan Wong: Radiation Oncology, British Columbia Cancer – Vancouver, Vancouver, BC, Canada
- Vicky Huang: Medical Physics, British Columbia Cancer – Fraser Valley, Surrey, BC, Canada
- Joshua A. Giambattista: Radiation Oncology, Saskatchewan Cancer Agency, Regina, SK, Canada; Limbus AI Inc, Regina, SK, Canada
- Tony Teke, Siavash Atrchian: Medical Physics/Radiation Oncology, British Columbia Cancer – Kelowna, Kelowna, BC, Canada
|
46
|
Cao B, Zhang KC, Wei B, Chen L. Status quo and future prospects of artificial neural network from the perspective of gastroenterologists. World J Gastroenterol 2021; 27:2681-2709. [PMID: 34135549 PMCID: PMC8173384 DOI: 10.3748/wjg.v27.i21.2681] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Revised: 03/29/2021] [Accepted: 04/21/2021] [Indexed: 02/06/2023] Open
Abstract
Artificial neural networks (ANNs) are one of the primary types of artificial intelligence and have been rapidly developed and used in many fields. In recent years, there has been a sharp increase in research concerning ANNs in gastrointestinal (GI) diseases. This state-of-the-art technique exhibits excellent performance in diagnosis, prognostic prediction, and treatment. Competitions between ANNs and GI experts suggest that efficiency and accuracy might be compatible by virtue of technical advancements. However, the shortcomings of ANNs are not negligible and may induce alterations in many aspects of medical practice. In this review, we introduce basic knowledge about ANNs and summarize the current achievements of ANNs in GI diseases from the perspective of gastroenterologists. Existing limitations and future directions are also proposed to optimize the clinical potential of ANNs. In consideration of barriers to interdisciplinary knowledge, sophisticated concepts are discussed using plain words and metaphors to make this review more easily understood by medical practitioners and the general public.
Affiliation(s)
- Bo Cao, Ke-Cheng Zhang, Bo Wei, Lin Chen: Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
|
47
|
Yang TH, Yang CW, Sun YN, Horng MH. A Fully-Automatic Segmentation of the Carpal Tunnel from Magnetic Resonance Images Based on the Convolutional Neural Network-Based Approach. J Med Biol Eng 2021. [DOI: 10.1007/s40846-021-00615-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Purpose
Carpal tunnel syndrome is one of the most common peripheral neuropathies. For magnetic resonance imaging, segmentation of the carpal tunnel and its contents, including the flexor tendons and the median nerve, is an important issue. In this study, a convolutional neural network (CNN) model, modified from the original DeepLabv3+ model, was proposed to segment three primary structures: the carpal tunnel, the flexor tendons, and the median nerve.
Methods
To extract feature maps important for segmenting the carpal tunnel, flexor tendons, and median nerve, the proposed CNN model, termed modified DeepLabv3+, uses DenseNet-121 as a backbone and adds dilated convolution to the original spatial pyramid pooling module. A MaskTrack method was used to refine the segmentations generated by modified DeepLabv3+, which can be small and blurred in appearance. The average Dice similarity coefficient (ADSC) was used as the performance index for evaluating the segmentation results.
Results
Sixteen MR images from different subjects were obtained from the National Cheng Kung University Hospital. The proposed modified DeepLabv3+ achieved the following ADSCs: 0.928 for the carpal tunnel, 0.872 for the flexor tendons, and 0.785 for the median nerve. With MaskTrack refinement, the ADSC for the median nerve improved to 0.8053.
Conclusions
The experimental results showed that the proposed modified DeepLabv3+ model improves segmentation of the carpal tunnel and its contents, with results superior to those generated by the original DeepLabv3+. Additionally, MaskTrack can effectively refine median nerve segmentations.
|
48
|
Cusumano D, Boldrini L, Dhont J, Fiorino C, Green O, Güngör G, Jornet N, Klüter S, Landry G, Mattiucci GC, Placidi L, Reynaert N, Ruggieri R, Tanadini-Lang S, Thorwarth D, Yadav P, Yang Y, Valentini V, Verellen D, Indovina L. Artificial Intelligence in magnetic Resonance guided Radiotherapy: Medical and physical considerations on state of art and future perspectives. Phys Med 2021; 85:175-191. [PMID: 34022660 DOI: 10.1016/j.ejmp.2021.05.010] [Citation(s) in RCA: 60] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/30/2021] [Revised: 04/15/2021] [Accepted: 05/04/2021] [Indexed: 12/14/2022] Open
Abstract
Over the last years, technological innovation in Radiotherapy (RT) led to the introduction of Magnetic Resonance-guided RT (MRgRT) systems. Due to the higher soft tissue contrast compared to on-board CT-based systems, MRgRT is expected to significantly improve treatment in many situations. MRgRT systems may extend the management of inter- and intra-fraction anatomical changes, offering the possibility of online adaptation of the dose distribution according to daily patient anatomy and of directly monitoring tumor motion during treatment delivery by means of continuous cine MR acquisition. Online adaptive treatments require a multidisciplinary and well-trained team, able to perform a series of operations in a safe, precise and fast manner while the patient is waiting on the treatment couch. Artificial Intelligence (AI) is expected to rapidly contribute to MRgRT, primarily by safely and efficiently automating the various manual operations that characterize online adaptive treatments. Furthermore, AI is finding relevant applications in MRgRT in the fields of image segmentation, synthetic CT reconstruction, automatic (online) planning and the development of predictive models based on daily MRI. This review provides a comprehensive overview of the current AI integration in MRgRT from a medical physicist's perspective. Medical physicists are expected to be major actors in solving new tasks and taking on new responsibilities: their traditional role of guardians of new technology implementation will change, with increasing emphasis on managing AI tools, processes and advanced systems for imaging and data analysis, gradually replacing many repetitive manual tasks.
Affiliation(s)
- Davide Cusumano, Luca Boldrini, Lorenzo Placidi, Vincenzo Valentini, Luca Indovina: Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Rome, Italy
- Claudio Fiorino: Medical Physics, San Raffaele Scientific Institute, Milan, Italy
- Olga Green: Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO, USA
- Görkem Güngör: Acıbadem MAA University, School of Medicine, Department of Radiation Oncology, Maslak Istanbul, Turkey
- Núria Jornet: Servei de Radiofísica i Radioprotecció, Hospital de la Santa Creu i Sant Pau, Spain
- Sebastian Klüter: Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Guillaume Landry: Department of Radiation Oncology, LMU Munich, Munich, Germany; German Cancer Consortium (DKTK), Munich, Germany
- Nick Reynaert: Department of Medical Physics, Institut Jules Bordet, Belgium
- Ruggero Ruggieri: Dipartimento di Radioterapia Oncologica Avanzata, IRCCS "Sacro cuore - don Calabria", Negrar di Valpolicella (VR), Italy
- Stephanie Tanadini-Lang: Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Daniela Thorwarth: Section for Biomedical Physics, Department of Radiation Oncology, University Hospital Tübingen, Tübingen, Germany
- Poonam Yadav: Department of Human Oncology, School of Medicine and Public Health, University of Wisconsin - Madison, USA
- Yingli Yang: Department of Radiation Oncology, David Geffen School of Medicine, University of California Los Angeles, USA
- Dirk Verellen: Department of Medical Physics, Iridium Cancer Network, Belgium; Faculty of Medicine and Health Sciences, Antwerp University, Antwerp, Belgium
|
49
|
Wang PP, Deng CL, Wu B. Magnetic resonance imaging-based artificial intelligence model in rectal cancer. World J Gastroenterol 2021; 27:2122-2130. [PMID: 34025068 PMCID: PMC8117733 DOI: 10.3748/wjg.v27.i18.2122] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Revised: 02/23/2021] [Accepted: 03/16/2021] [Indexed: 02/06/2023] Open
Abstract
Rectal magnetic resonance imaging (MRI) is the preferred method for the diagnosis of rectal cancer, as recommended by the guidelines. Rectal MRI can accurately evaluate tumor location, tumor stage, invasion depth, extramural vascular invasion, and the circumferential resection margin. We summarize the progress of research on the use of artificial intelligence (AI) in rectal cancer in recent years. AI, represented by machine learning, is being increasingly used in the medical field, and the application of AI models based on high-resolution MRI in rectal cancer has been increasingly reported. Beyond staging diagnosis and radiotherapy localization, a growing number of studies have reported that AI models based on high-resolution MRI can be used to predict the response to chemotherapy and the prognosis of patients.
Affiliation(s)
- Pei-Pei Wang, Chao-Lin Deng, Bin Wu: Department of General Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
|
50
|
Michalet M, Azria D, Tardieu M, Tibermacine H, Nougaret S. Radiomics in radiation oncology for gynecological malignancies: a review of literature. Br J Radiol 2021; 94:20210032. [PMID: 33882246 DOI: 10.1259/bjr.20210032] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022] Open
Abstract
Radiomics is the extraction of a large number of quantitative imaging features with the aim of detecting information that correlates with useful clinical outcomes. Features are extracted, after delineation of an area of interest, from a single imaging modality or a combined set of modalities (including X-ray, US, CT, PET/CT and MRI). Given the high dimensionality, the analytical process requires the use of artificial intelligence algorithms. First developed for diagnostic performance in radiology, radiomics has now been translated to radiation oncology, mainly to predict tumor response and patient outcome, but other applications have been developed, such as dose painting, prediction of side-effects, and quality assurance. In gynecological cancers, most studies have focused on the outcomes of cervical cancers after chemoradiation. This review highlights the role of this new tool for radiation oncologists, with particular focus on female GU oncology.
Affiliation(s)
- Morgan Michalet, David Azria: University Federation of Radiation Oncology of Mediterranean Occitanie, Montpellier Cancer Institute, Univ Montpellier, Montpellier, France; INSERM U1194 IRCM, Montpellier, France
|