1. Huang C, Sheng Y, Lian H, Zhang W, Lin H, Huang X, Tang N, Zhao L, Guo Y. AR-AI assisted ophthalmic nursing: Preliminary usability study in clinical settings. Digit Health 2024;10:20552076241269470. PMID: 39257872; PMCID: PMC11384517; DOI: 10.1177/20552076241269470.
Abstract
Objective Ophthalmic ward nursing work is demanding and busy, and many researchers have tried to introduce artificial intelligence (AI) technology to assist nurses in performing nursing tasks. This study aims to use augmented reality (AR) and AI technology to develop an intelligent assistant system for ophthalmic ward nurses and to evaluate the usability and acceptability of the system in assisting nurses' clinical work. Methods Based on AR technology and within a deep learning framework, the system management, functions, and interfaces were implemented using acoustic recognition, voice interaction, and image recognition technologies. The result was an intelligent assistance system with functions such as patient face recognition, automatic information matching, and nursing work management. Ophthalmic day-ward nurses were invited to complete the System Usability Scale (SUS). The AR-based intelligent assistance system (AR-IAS) served as the experimental group, and the existing personal digital assistant (PDA) system served as the control group. The results on the three subscales of the usability scale (learnability, efficiency, and satisfaction) were compared, and the clinical usability score of the AR-IAS system was calculated. Results This study showed that the AR-IAS and PDA systems had learnability subscale scores of 22.50/30.00 and 21.00/30.00, respectively; efficiency subscale scores of 29.67/40.00 and 28.67/40.00, respectively; and satisfaction subscale scores of 23.67/30.00 and 23.17/30.00, respectively. The overall usability score of the AR-IAS system was 75.83/100.00. Conclusion Based on the System Usability Scale results, the AR-IAS system developed using AR and AI technology has good overall usability and can be accepted by clinical nurses. It is suitable for use in ophthalmic nursing tasks and warrants clinical promotion and further research.
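The overall 0-100 figure reported above is consistent with standard SUS scoring (ten items rated 1-5; odd-numbered items scored as the response minus 1, even-numbered items as 5 minus the response, and the sum scaled by 2.5). The sketch below shows that standard calculation in Python; it is a generic illustration, and the learnability/efficiency/satisfaction subscale split used in the study is specific to that paper and is not reproduced here.

```python
def sus_score(responses):
    """Standard SUS: responses is a list of ten integers in [1, 5], in questionnaire order."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    odd = sum(r - 1 for r in responses[0::2])    # positively worded items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])   # negatively worded items 2, 4, 6, 8, 10
    return (odd + even) * 2.5                    # rescale the 0-40 raw total to 0-100

# Example: a fairly positive questionnaire lands in the mid-70s, comparable to the
# 75.83/100 overall usability reported for AR-IAS.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```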
Affiliation(s)
- Changke Huang, Yaying Sheng, Hengli Lian, Wenjie Zhang, Hui Lin, Xiaofang Huang, Ning Tang, Lvjun Zhao, Yingxuan Guo: National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, China
2. Morya AK, Janti SS, Sisodiya P, Tejaswini A, Prasad R, Mali KR, Gurnani B. Everything real about unreal artificial intelligence in diabetic retinopathy and in ocular pathologies. World J Diabetes 2022;13:822-834. PMID: 36311999; PMCID: PMC9606792; DOI: 10.4239/wjd.v13.i10.822.
Abstract
Artificial intelligence (AI) is a multidisciplinary field that aims to build platforms enabling machines to act, perceive, and reason intelligently, with the goal of automating activities that presently require human intelligence. From the cornea to the retina, AI is expected to help ophthalmologists diagnose and treat ocular diseases. In ophthalmology, computerized analytics are viewed as efficient and more objective ways to interpret series of images and reach a conclusion. AI can be used to diagnose and grade diabetic retinopathy, glaucoma, age-related macular degeneration, cataracts, retinopathy of prematurity, and keratoconus, and to assist with intraocular lens (IOL) power calculation. This review article discusses various aspects of artificial intelligence in ophthalmology.
Affiliation(s)
- Arvind Kumar Morya: Department of Ophthalmology, All India Institute of Medical Sciences Bibinagar, Hyderabad 508126, Telangana, India
- Siddharam S Janti: Department of Ophthalmology, All India Institute of Medical Sciences Bibinagar, Hyderabad 508126, Telangana, India
- Priya Sisodiya: Department of Ophthalmology, Sadguru Netra Chikitsalaya, Chitrakoot 485001, Madhya Pradesh, India
- Antervedi Tejaswini: Department of Ophthalmology, All India Institute of Medical Sciences Bibinagar, Hyderabad 508126, Telangana, India
- Rajendra Prasad: Department of Ophthalmology, R P Eye Institute, New Delhi 110001, New Delhi, India
- Kalpana R Mali: Department of Pharmacology, All India Institute of Medical Sciences, Bibinagar, Hyderabad 508126, Telangana, India
- Bharat Gurnani: Department of Ophthalmology, Aravind Eye Hospital and Post Graduate Institute of Ophthalmology, Pondicherry 605007, Pondicherry, India
3. CDC-Net: Cascaded decoupled convolutional network for lesion-assisted detection and grading of retinopathy using optical coherence tomography (OCT) scans. Biomed Signal Process Control 2021. DOI: 10.1016/j.bspc.2021.103030.
4. Hassan B, Qin S, Ahmed R, Hassan T, Taguri AH, Hashmi S, Werghi N. Deep learning based joint segmentation and characterization of multi-class retinal fluid lesions on OCT scans for clinical use in anti-VEGF therapy. Comput Biol Med 2021;136:104727. PMID: 34385089; DOI: 10.1016/j.compbiomed.2021.104727.
Abstract
BACKGROUND In anti-vascular endothelial growth factor (anti-VEGF) therapy, an accurate estimation of multi-class retinal fluid (MRF) is required for activity prescription and intravitreal dosing. This study proposes an end-to-end deep learning-based retinal fluids segmentation network (RFS-Net) to segment and recognize three MRF lesion manifestations, namely intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED), from multi-vendor optical coherence tomography (OCT) imagery. The proposed image analysis tool will optimize anti-VEGF therapy and contribute to reducing inter- and intra-observer variability. METHOD The proposed RFS-Net architecture integrates atrous spatial pyramid pooling (ASPP), residual, and inception modules in the encoder path to learn better features and conserve more global information for precise segmentation and characterization of MRF lesions. The RFS-Net model is trained and validated using OCT scans from multiple vendors (Topcon, Cirrus, Spectralis), collected from three publicly available datasets. The first dataset, consisting of OCT volumes from 112 subjects (a total of 11,334 B-scans), is used for both training and evaluation. The remaining two datasets are used only for evaluation, to check the trained RFS-Net's generalizability on unseen OCT scans; together they contain 1572 OCT B-scans from 1255 subjects. The performance of the proposed RFS-Net model is assessed through various evaluation metrics. RESULTS The proposed RFS-Net model achieved mean F1 scores of 0.762, 0.796, and 0.805 for segmenting IRF, SRF, and PED, respectively. Moreover, with automated segmentation of the three retinal manifestations, RFS-Net brings a considerable gain in efficiency compared to the tedious and demanding manual segmentation of MRF. CONCLUSIONS The proposed RFS-Net is a potential diagnostic tool for the automatic segmentation of MRF (IRF, SRF, and PED) lesions. It is expected to strengthen inter-observer agreement, and standardization of dosimetry is envisaged as a result.
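The mean F1 values quoted above are per-class overlap scores between predicted and reference fluid masks. The sketch below shows how such a score is commonly computed from binary masks; the arrays and class labels are assumed examples, not the paper's code or data.

```python
import numpy as np

def f1_score_mask(pred, target, eps=1e-8):
    """F1 (equivalently Dice) between two binary segmentation masks of equal shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    return 2 * tp / (2 * tp + fp + fn + eps)

# Mean F1 over a set of B-scans for one fluid class (e.g. IRF); preds/targets are lists of masks.
def mean_f1(preds, targets):
    return float(np.mean([f1_score_mask(p, t) for p, t in zip(preds, targets)]))
```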
Affiliation(s)
- Bilal Hassan: School of Automation Science and Electrical Engineering, Beihang University (BUAA), Beijing, 100191, China
- Shiyin Qin: School of Automation Science and Electrical Engineering, Beihang University (BUAA), Beijing, 100191, China; School of Electrical Engineering and Intelligentization, Dongguan University of Technology, Dongguan, 523808, China
- Ramsha Ahmed: School of Computer and Communication Engineering, University of Science and Technology Beijing (USTB), Beijing, 100083, China
- Taimur Hassan: Center for Cyber-Physical Systems, Khalifa University of Science and Technology, Abu Dhabi, 127788, United Arab Emirates
- Abdel Hakeem Taguri: Abu Dhabi Healthcare Company (SEHA), Abu Dhabi, 127788, United Arab Emirates
- Shahrukh Hashmi: Abu Dhabi Healthcare Company (SEHA), Abu Dhabi, 127788, United Arab Emirates
- Naoufel Werghi: Center for Cyber-Physical Systems, Khalifa University of Science and Technology, Abu Dhabi, 127788, United Arab Emirates
5. Zheng B, Jiang Q, Lu B, He K, Wu MN, Hao XL, Zhou HX, Zhu SJ, Yang WH. Five-Category Intelligent Auxiliary Diagnosis Model of Common Fundus Diseases Based on Fundus Images. Transl Vis Sci Technol 2021;10:20. PMID: 34132760; PMCID: PMC8212443; DOI: 10.1167/tvst.10.7.20.
Abstract
PURPOSE The discrepancy between the number of ophthalmologists and the number of patients in China is large. Retinal vein occlusion (RVO), high myopia, glaucoma, and diabetic retinopathy (DR) are common fundus diseases. Therefore, in this study, a five-category intelligent auxiliary diagnosis model for common fundus diseases is proposed, and the model's area of focus is marked. METHODS A total of 2000 fundus images were collected; 3 different 5-category intelligent auxiliary diagnosis models for common fundus diseases were trained via different transfer learning and image preprocessing techniques. A total of 1134 fundus images were used for testing. The clinical diagnostic results were compared with the models' diagnostic results. The main evaluation indicators included sensitivity, specificity, F1-score, area under the receiver operating characteristic curve (AUC), 95% confidence interval (CI), kappa, and accuracy. Interpretation methods were used to obtain the model's area of focus in the fundus image. RESULTS The accuracy rates of the 3 intelligent auxiliary diagnosis models on the 1134 fundus images were all above 90%, the kappa values were all above 88%, the diagnostic consistency was good, and the AUC approached 0.90. For the 4 common fundus diseases, the best sensitivity, specificity, and F1-scores of the 3 models were 88.27%, 97.12%, and 84.02%; 89.94%, 99.52%, and 93.90%; 95.24%, 96.43%, and 85.11%; and 88.24%, 98.21%, and 89.55%, respectively. CONCLUSIONS This study designed a five-category intelligent auxiliary diagnosis model for common fundus diseases. It can be used to obtain the diagnostic category of fundus images and the model's area of focus. TRANSLATIONAL RELEVANCE This study will help primary doctors provide effective services to all ophthalmologic patients.
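The evaluation indicators listed above (per-class sensitivity and specificity, kappa, accuracy, and AUC) can be reproduced with standard tooling. The sketch below uses scikit-learn on hypothetical labels and softmax-style outputs for a five-category classifier; it is not the study's code.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score, accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 5, size=200)          # hypothetical ground-truth labels (5 categories)
y_prob = rng.random((200, 5))
y_prob /= y_prob.sum(axis=1, keepdims=True)    # softmax-like scores summing to 1 per image
y_pred = y_prob.argmax(axis=1)

cm = confusion_matrix(y_true, y_pred, labels=range(5))
for c in range(5):
    tp = cm[c, c]
    fn = cm[c].sum() - tp
    fp = cm[:, c].sum() - tp
    tn = cm.sum() - tp - fn - fp
    print(f"class {c}: sensitivity={tp / (tp + fn):.3f}, specificity={tn / (tn + fp):.3f}")

print("accuracy:", accuracy_score(y_true, y_pred))
print("kappa:", cohen_kappa_score(y_true, y_pred))
print("one-vs-rest AUC:", roc_auc_score(y_true, y_prob, multi_class="ovr"))
```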
Affiliation(s)
- Bo Zheng: School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
- Qin Jiang: Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
- Bing Lu: School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
- Kai He: School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
- Mao-Nian Wu: School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
- Xiu-Lan Hao: School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
- Hong-Xia Zhou: School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China; College of Computer and Information, Hehai University, Nanjing, Jiangsu, China
- Shao-Jun Zhu: School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
- Wei-Hua Yang: Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
6. Song Z, Xu L, Wang J, Rasti R, Sastry A, Li JD, Raynor W, Izatt JA, Toth CA, Vajzovic L, Deng B, Farsiu S. Lightweight Learning-Based Automatic Segmentation of Subretinal Blebs on Microscope-Integrated Optical Coherence Tomography Images. Am J Ophthalmol 2021;221:154-168. PMID: 32707207; PMCID: PMC8120705; DOI: 10.1016/j.ajo.2020.07.020.
Abstract
PURPOSE Subretinal injections of therapeutics are commonly used to treat ocular diseases. Accurate dosing of therapeutics at target locations is crucial but difficult to achieve using subretinal injections due to leakage, and there is no method available to measure the volume of therapeutics successfully administered to the subretinal location during surgery. Here, we introduce the first automatic method for quantifying the volume of subretinal blebs, using porcine eyes injected with Ringer's lactate solution as samples. DESIGN Ex vivo animal study. METHODS Microscope-integrated optical coherence tomography was used to obtain 3D visualization of subretinal blebs in porcine eyes at Duke Eye Center. Two different injection phases were imaged and analyzed in 15 eyes (30 volumes), selected from a total of 37 eyes. The inclusion/exclusion criteria were set independently of the algorithm-development and testing team. A novel lightweight, deep learning-based algorithm was designed to segment subretinal bleb boundaries. A cross-validation method was used to avoid selection bias. An ensemble-classifier strategy was applied to generate final results for the test dataset. RESULTS The algorithm performs notably better than 4 other state-of-the-art deep learning-based segmentation methods, achieving F1 scores of 93.86 ± 1.17% and 96.90 ± 0.59% on the independent test data for entry and full blebs, respectively. CONCLUSION The proposed algorithm accurately segmented the volumetric boundaries of Ringer's lactate solution delivered into the subretinal space of porcine eyes with robust performance and real-time speed. This is the first step for future applications in computer-guided delivery of therapeutics into the subretinal space in human subjects.
Affiliation(s)
- Zhenxi Song: School of Electrical and Information Engineering, Tianjin University, Tianjin, China; Department of Biomedical Engineering, Duke University, Durham, North Carolina, USA
- Liangyu Xu: Department of Biomedical Engineering, Duke University, Durham, North Carolina, USA
- Jiang Wang: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Reza Rasti: Department of Biomedical Engineering, Duke University, Durham, North Carolina, USA
- Ananth Sastry: Department of Ophthalmology, Duke University School of Medicine, Durham, North Carolina, USA
- Jianwei D Li: Department of Biomedical Engineering, Duke University, Durham, North Carolina, USA
- William Raynor: Department of Ophthalmology, Duke University School of Medicine, Durham, North Carolina, USA
- Joseph A Izatt: Department of Biomedical Engineering, Duke University, Durham, North Carolina, USA; Department of Ophthalmology, Duke University School of Medicine, Durham, North Carolina, USA
- Cynthia A Toth: Department of Biomedical Engineering, Duke University, Durham, North Carolina, USA; Department of Ophthalmology, Duke University School of Medicine, Durham, North Carolina, USA
- Lejla Vajzovic: Department of Ophthalmology, Duke University School of Medicine, Durham, North Carolina, USA
- Bin Deng: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Sina Farsiu: Department of Biomedical Engineering, Duke University, Durham, North Carolina, USA; Department of Ophthalmology, Duke University School of Medicine, Durham, North Carolina, USA
7. Raja H, Hassan T, Akram MU, Werghi N. Clinically Verified Hybrid Deep Learning System for Retinal Ganglion Cells Aware Grading of Glaucomatous Progression. IEEE Trans Biomed Eng 2020;68:2140-2151. PMID: 33044925; DOI: 10.1109/tbme.2020.3030085.
Abstract
OBJECTIVE Glaucoma is the second leading cause of blindness worldwide. Glaucomatous progression can be easily monitored by analyzing the degeneration of retinal ganglion cells (RGCs). Many researchers have screened for glaucoma by measuring cup-to-disc ratios from fundus and optical coherence tomography scans. However, this paper presents a novel strategy that pays attention to RGC atrophy for screening glaucomatous pathologies and grading their severity. METHODS The proposed framework encompasses a hybrid convolutional network that extracts the retinal nerve fiber layer, the ganglion cell with inner plexiform layer, and the ganglion cell complex regions, thus allowing a quantitative screening of glaucomatous subjects. Furthermore, the severity of glaucoma in screened cases is objectively graded by analyzing the thickness of these regions. RESULTS The proposed framework was rigorously tested on the publicly available Armed Forces Institute of Ophthalmology (AFIO) dataset, where it achieved an F1 score of 0.9577 for diagnosing glaucoma, a mean dice coefficient of 0.8697 for extracting the RGC regions, and an accuracy of 0.9117 for grading glaucomatous progression. Furthermore, the performance of the proposed framework was clinically verified against the markings of four expert ophthalmologists, achieving a statistically significant Pearson correlation coefficient of 0.9236. CONCLUSION An automated assessment of RGC degeneration yields better glaucomatous screening and grading compared to state-of-the-art solutions. SIGNIFICANCE An RGC-aware system not only screens glaucoma but can also grade its severity; here we present an end-to-end solution that is thoroughly evaluated on a standardized dataset and is clinically validated for analyzing glaucomatous pathologies.
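The clinical-verification figure above is a Pearson correlation between automated measurements and expert markings. A minimal sketch of that comparison is shown below; the thickness values are made up for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired measurements (automated vs. averaged expert markings), e.g. layer thickness in micrometers.
auto_vals = np.array([96.2, 88.5, 101.3, 74.9, 83.1, 90.4])
expert_vals = np.array([95.0, 90.1, 100.8, 76.3, 82.0, 91.2])

r, p = pearsonr(auto_vals, expert_vals)
print(f"Pearson r = {r:.4f} (p = {p:.4f})")
```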
8. Infrared retinal images for flashless detection of macular edema. Sci Rep 2020;10:14384. PMID: 32873818; PMCID: PMC7463268; DOI: 10.1038/s41598-020-71010-0.
Abstract
This study evaluates the use of infrared (IR) images of the retina, obtained without flashes of light, for machine-based detection of macular oedema (ME). A total of 41 images from 21 subjects (23 ME cases and 18 controls) were studied. Histogram and gray-level co-occurrence matrix (GLCM) parameters were extracted from the IR retinal images. The diagnostic performance of the histogram and GLCM parameters was calculated retrospectively based on the known label of each image. The results of a one-way ANOVA indicated a significant difference between ME eyes and controls when using GLCM features, with the correlation feature having the highest area under the curve (AUC, AZ) value. The performance of the proposed method was also evaluated using a support vector machine (SVM) classifier, which gave a sensitivity and specificity of 100%. This research shows that the texture of IR images of the retina differs significantly between ME eyes and controls and that it can be considered for machine-based detection of ME without requiring flashes of light.
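As an illustration of the texture pipeline described above, the sketch below extracts a few GLCM properties from an 8-bit image with scikit-image and notes how they would feed an SVM; the image, labels, and any file handling are placeholders rather than the study's data.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_features(img_u8):
    """img_u8: 2D uint8 image; returns contrast, homogeneity, energy, and correlation."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return np.array([graycoprops(glcm, prop)[0, 0]
                     for prop in ("contrast", "homogeneity", "energy", "correlation")])

# Hypothetical image standing in for a real IR retinal scan:
img = np.random.default_rng(0).integers(0, 256, size=(256, 256), dtype=np.uint8)
print(glcm_features(img))
# Feature vectors for all images would then be stacked into X and fed to a classifier,
# e.g. sklearn.svm.SVC(kernel="rbf"), with labels 1 = ME and 0 = control.
```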
9. Zhu Y, Gao W, Guo Z, Zhou Y, Zhou Y. Liver tissue classification of en face images by fractal dimension-based support vector machine. J Biophotonics 2020;13:e201960154. PMID: 31909553; DOI: 10.1002/jbio.201960154.
Abstract
Full-field optical coherence tomography (FF-OCT) has been reported to offer label-free subcellular imaging performance. To realize quantitative cancer detection, a support vector machine model for classifying normal and cancerous human liver tissue is proposed using en face tomographic images. Twenty samples (10 normal and 10 cancerous) were obtained from human surgical specimens and yielded 285 en face tomographic images. Six histogram features and one proposed fractal dimension parameter, which reveal the refractive index inhomogeneities of the tissue, were extracted and made up the training set. Another 16 samples (8 normal and 8 cancerous) were imaged (190 images) and used as the test set with the same features. First, a subcellular-resolution tomographic image library for four histopathological areas in liver tissue was established. Second, areas under the receiver operating characteristic curve of 0.9378, 0.9858, 0.9391, and 0.9517 for prediction of cancerous hepatic cells, the central vein, fibrosis, and the portal vein were measured on the test set. The results indicate that the proposed classifier based on FF-OCT images shows promise as a label-free, quantitative assessment for tumor detection, suggesting the fractal dimension-based classifier could aid clinicians in detecting tumor boundaries for resection in surgery in the future.
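The fractal dimension parameter mentioned above is commonly estimated by box counting. The sketch below is a generic box-counting implementation on a binarized image, offered as an assumed illustration rather than the authors' exact descriptor.

```python
import numpy as np

def box_counting_dimension(binary_img, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal dimension of a 2D boolean image by box counting."""
    counts = []
    h, w = binary_img.shape
    for s in box_sizes:
        # Count boxes of side s that contain at least one foreground pixel.
        n = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if binary_img[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    # Slope of log(count) vs. log(1/box size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square has dimension close to 2.
print(box_counting_dimension(np.ones((128, 128), dtype=bool)))  # ~2.0
```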
Affiliation(s)
- Yue Zhu: Nanjing University of Science and Technology, Department of Optical Engineering, Nanjing, China
- Wanrong Gao: Nanjing University of Science and Technology, Department of Optical Engineering, Nanjing, China
- Zhenyan Guo: Nanjing University of Science and Technology, Department of Optical Engineering, Nanjing, China
- Yawen Zhou: Nanjing University of Science and Technology, Department of Optical Engineering, Nanjing, China
- Yuan Zhou: Nanjing University, Medical School of Nanjing University, Nanjing, China
10. Quellec G, Kowal J, Hasler PW, Scholl HPN, Zweifel S, Konstantinos B, Carvalho JER, Heeren T, Egan C, Tufail A, Maloca PM. Feasibility of support vector machine learning in age-related macular degeneration using small sample yielding sparse optical coherence tomography data. Acta Ophthalmol 2019;97:e719-e728. PMID: 30839157; DOI: 10.1111/aos.14055.
Abstract
PURPOSE A retrospective pilot study was conducted to demonstrate the utility of a novel support vector machine learning (SVML) algorithm in a small three-dimensional (3D) sample yielding sparse optical coherence tomography (spOCT) data for the automatic monitoring of neovascular (wet) age-related macular degeneration (wAMD). METHODS From the anti-vascular endothelial growth factor injection database, 588 consecutive pairs of OCT volumes (57,624 B-scans) were selected in 70 randomly chosen wAMD patients treated with ranibizumab. The SVML algorithm was applied to 183 OCT volume pairs (17,934 B-scans) in 30 patients. Four independent, diagnosis-blinded retina specialists indicated whether wAMD activity was present between 100 pairs of consecutive OCT volumes (9800 B-scans) in the remaining 40 patients, for comparison with the SVML algorithm and a non-complex baseline algorithm using only retinal thickness. The SVML algorithm was assessed using inter-observer variability and receiver operating characteristic (ROC) analyses. RESULTS The retina specialists showed an average Cohen's κ of 0.57 ± 0.13 (minimum: 0.41, maximum: 0.83). The average κ between the proposed algorithm and the retina specialists was 0.62 ± 0.05, and 0.43 ± 0.14 between the baseline algorithm and the retina specialists. Using each of the four retina specialists as the reference, the proposed method showed a superior area under the ROC curve of 0.91 ± 0.03, compared to 0.81 ± 0.05 for the baseline algorithm. CONCLUSION The SVML algorithm was as effective as the retina specialists in detecting activity in wAMD. Support vector machine learning (SVML) may be a useful monitoring tool in wAMD suited for small samples that yield sparse OCT data, possibly derived from self-measuring OCT robots.
Affiliation(s)
- Gwenolé Quellec: ARTORG Centre for Biomedical Engineering Research, University of Bern, Bern, Switzerland; Inserm, UMR 1101, Brest, France
- Jens Kowal: ARTORG Centre for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Pascal W. Hasler: OCTlab, Department of Ophthalmology, University of Basel, Basel, Switzerland; Department of Ophthalmology, University of Basel, Basel, Switzerland
- Hendrik P. N. Scholl: Department of Ophthalmology, University of Basel, Basel, Switzerland; Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland; Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland, USA
- Sandrine Zweifel: Department of Ophthalmology, University Hospital Zurich, Zurich, Switzerland
- Catherine Egan: Moorfields Eye Hospital NHS Trust, Institute of Ophthalmology, UCL, London, UK
- Adnan Tufail: Moorfields Eye Hospital NHS Trust, Institute of Ophthalmology, UCL, London, UK
- Peter M. Maloca: OCTlab, Department of Ophthalmology, University of Basel, Basel, Switzerland; Department of Ophthalmology, University of Basel, Basel, Switzerland; Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland; Moorfields Eye Hospital NHS Trust, Institute of Ophthalmology, UCL, London, UK
11. Rong Y, Xiang D, Zhu W, Shi F, Gao E, Fan Z, Chen X. Deriving external forces via convolutional neural networks for biomedical image segmentation. Biomed Opt Express 2019;10:3800-3814. PMID: 31452976; PMCID: PMC6701547; DOI: 10.1364/boe.10.003800.
Abstract
Active contours, or snakes, are widely applied to biomedical image segmentation. They are curves defined within an image domain that can move to object boundaries under the influence of internal and external forces, where the internal forces are generally computed from the curves themselves and the external forces from the image data. Designing external forces properly is a key point in active contour algorithms, since the external forces play a leading role in the evolution of active contours. One of the most popular external forces for active contour models is gradient vector flow (GVF). However, GVF is sensitive to noise and false edges, which limits its application area. To handle this problem, in this paper we propose using GVF as a reference to train a convolutional neural network to derive an external force. The derived external force is then integrated into active contour models for curve evolution. Three clinical applications, segmentation of the optic disk in fundus images, fluid in retinal optical coherence tomography images, and the fetal head in ultrasound images, are employed to evaluate the proposed method. The results show that the proposed method is very promising, since it achieves competitive performance on different tasks compared to state-of-the-art algorithms.
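For reference, classic GVF (the external force the network is trained to approximate) diffuses the gradient of an edge map across homogeneous regions. The sketch below is a minimal NumPy/SciPy version of that iteration under assumed parameters; it is not the paper's CNN-derived force.

```python
import numpy as np
from scipy import ndimage

def gradient_vector_flow(edge_map, mu=0.2, n_iter=200, dt=0.5):
    """edge_map: 2D float array (e.g. gradient magnitude of a smoothed image)."""
    fy, fx = np.gradient(edge_map)        # derivatives of the edge map (rows, cols)
    u, v = fx.copy(), fy.copy()           # initialize the flow field with the raw gradient
    mag2 = fx**2 + fy**2
    for _ in range(n_iter):
        # Diffuse the field while keeping it anchored to the edge-map gradient near strong edges.
        u += dt * (mu * ndimage.laplace(u) - (u - fx) * mag2)
        v += dt * (mu * ndimage.laplace(v) - (v - fy) * mag2)
    return u, v                           # external-force components that drive the contour

# Example on a synthetic disk: smooth, take the gradient magnitude, then compute the flow field.
yy, xx = np.mgrid[0:64, 0:64]
disk = (np.hypot(xx - 32, yy - 32) < 15).astype(float)
gy, gx = np.gradient(ndimage.gaussian_filter(disk, 1.0))
u, v = gradient_vector_flow(np.hypot(gx, gy))
```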
Affiliation(s)
- Yibiao Rong: School of Electrical and Information Engineering, Soochow University, 215006, Suzhou, China (contributed equally to this work)
- Dehui Xiang: School of Electrical and Information Engineering, Soochow University, 215006, Suzhou, China (contributed equally to this work)
- Weifang Zhu: School of Electrical and Information Engineering, Soochow University, 215006, Suzhou, China
- Fei Shi: School of Electrical and Information Engineering, Soochow University, 215006, Suzhou, China
- Enting Gao: School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou, China
- Zhun Fan: Key Laboratory of Digital Signal and Image Processing of Guangdong Provincial, College of Engineering, Shantou University, 515063, Shantou, China
- Xinjian Chen: School of Electrical and Information Engineering, Soochow University, 215006, Suzhou, China; State Key Laboratory of Radiation Medicine and Protection, Soochow University, 215123, Suzhou, China
12. Hassan T, Akram MU, Masood MF, Yasin U. Deep structure tensor graph search framework for automated extraction and characterization of retinal layers and fluid pathology in retinal SD-OCT scans. Comput Biol Med 2019;105:112-124. DOI: 10.1016/j.compbiomed.2018.12.015.
13. Chua J, Tan B, Ang M, Nongpiur ME, Tan AC, Najjar RP, Milea D, Schmetterer L. Future clinical applicability of optical coherence tomography angiography. Clin Exp Optom 2018;102:260-269. PMID: 30537233; DOI: 10.1111/cxo.12854.
Abstract
Optical coherence tomography angiography (OCT-A) is an emerging technology that allows for the non-invasive imaging of the ocular microvasculature. Despite the wealth of observations and numerous research studies illustrating the potential clinical uses of OCT-A, this technique is currently rarely used in routine clinical settings. In this review, technical and clinical aspects of OCT-A imaging are discussed, and the future clinical potential of OCT-A is considered. An understanding of the basic principles and limitations of OCT-A technology will better inform clinicians of its future potential in the diagnosis and management of ocular diseases.
Affiliation(s)
- Jacqueline Chua: Ocular Imaging Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Eye Academic Clinical Program, Duke-NUS Medical School, Singapore
- Bingyao Tan: Ocular Imaging Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Marcus Ang: Ocular Imaging Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Eye Academic Clinical Program, Duke-NUS Medical School, Singapore; External Disease and Cornea Service, Moorfields Eye Hospital, London, UK
- Monisha E Nongpiur: Ocular Imaging Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Eye Academic Clinical Program, Duke-NUS Medical School, Singapore
- Anna Cs Tan: Ocular Imaging Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Eye Academic Clinical Program, Duke-NUS Medical School, Singapore
- Raymond P Najjar: Ocular Imaging Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Eye Academic Clinical Program, Duke-NUS Medical School, Singapore
- Dan Milea: Ocular Imaging Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Eye Academic Clinical Program, Duke-NUS Medical School, Singapore
- Leopold Schmetterer: Ocular Imaging Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Eye Academic Clinical Program, Duke-NUS Medical School, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore
14. Applications of Artificial Intelligence in Ophthalmology: General Overview. J Ophthalmol 2018;2018:5278196. PMID: 30581604; PMCID: PMC6276430; DOI: 10.1155/2018/5278196.
Abstract
With the emergence of unmanned aerial vehicles, autonomous vehicles, face recognition, and language processing, artificial intelligence (AI) has remarkably changed our lifestyle. Recent studies indicate that AI has astounding potential to perform much better than human beings in some tasks, especially in the image recognition field. As the amount of image data in ophthalmology imaging centers is increasing dramatically, there is an urgent need to analyze and process these data. AI has been applied to decipher medical data and has made extraordinary progress in intelligent diagnosis. In this paper, we present the basic workflow for building an AI model and systematically review applications of AI in the diagnosis of eye diseases. Future work should focus on setting up systematic AI platforms to diagnose general eye diseases based on multimodal data in the real world.
15. Hassan T, Akram MU, Akhtar M, Khan SA, Yasin U. Multilayered Deep Structure Tensor Delaunay Triangulation and Morphing Based Automated Diagnosis and 3D Presentation of Human Macula. J Med Syst 2018;42:223. PMID: 30284052; DOI: 10.1007/s10916-018-1078-3.
Abstract
Maculopathy is a group of diseases that affect a person's central vision and are often associated with diabetes. Many researchers have reported automated diagnosis of maculopathy from optical coherence tomography (OCT) images. However, to the best of our knowledge, there is no literature that presents a complete 3D suite for both the extraction and the diagnosis of the macula. Therefore, this paper presents a fully autonomous system based on a multilayered convolutional neural network (CNN), structure tensors, Delaunay triangulation, and morphing, which extracts up to nine retinal and choroidal layers along with the macular fluids. Furthermore, the proposed system utilizes the extracted retinal information for the automated diagnosis of maculopathy as well as for the robust reconstruction of a 3D macula. The proposed system has been validated on 41,921 retinal OCT scans acquired from different OCT machines, and it significantly outperformed existing state-of-the-art solutions, achieving a mean accuracy of 95.27% for extracting retinal and choroidal layers, a mean dice coefficient of 0.90 for extracting fluid pathology, and an overall accuracy of 96.07% for maculopathy diagnosis. To the best of our knowledge, the proposed framework is the first of its kind to provide a fully automated and complete 3D integrated solution for the extraction of the candidate macula along with its fully automated diagnosis against different macular syndromes.
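The 3D presentation step rests on triangulating extracted layer-boundary points into a surface. The sketch below shows a generic Delaunay triangulation of hypothetical boundary points with SciPy, as an assumed illustration rather than the paper's pipeline.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
xy = rng.random((500, 2)) * [512, 128]      # hypothetical en face (x, y) positions of boundary points
z = 100 + 10 * np.sin(xy[:, 0] / 50.0)      # hypothetical boundary depth at each point

tri = Delaunay(xy)                          # triangulate over the en face plane
faces = tri.simplices                       # each row indexes three vertices of a surface triangle
vertices = np.column_stack([xy, z])         # lift to 3D for rendering or morphing
print(vertices.shape, faces.shape)
```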
Affiliation(s)
- Taimur Hassan: Department of Computer & Software Engineering, National University of Sciences and Technology (NUST), Islamabad, 44000, Pakistan; Department of Electrical Engineering, Bahria University, Islamabad, 44000, Pakistan
- M Usman Akram: Department of Computer & Software Engineering, National University of Sciences and Technology (NUST), Islamabad, 44000, Pakistan
- Mahmood Akhtar: School of Civil and Environmental Engineering's Research Centre for Integrated Transport Innovation (rCITI), University of New South Wales, Sydney, Australia
- Shoab Ahmad Khan: Department of Computer & Software Engineering, National University of Sciences and Technology (NUST), Islamabad, 44000, Pakistan
- Ubaidullah Yasin: Department of Ophthalmology, Armed Forces Institute of Ophthalmology, Rawalpindi, Pakistan
16. Wu M, Fan W, Chen Q, Du Z, Li X, Yuan S, Park H. Three-dimensional continuous max flow optimization-based serous retinal detachment segmentation in SD-OCT for central serous chorioretinopathy. Biomed Opt Express 2017;8:4257-4274. PMID: 28966863; PMCID: PMC5611939; DOI: 10.1364/boe.8.004257.
Abstract
Assessment of serous retinal detachment plays an important role in the diagnosis of central serous chorioretinopathy (CSC). In this paper, we propose an automatic, three-dimensional segmentation method to detect both neurosensory retinal detachment (NRD) and pigment epithelial detachment (PED) in spectral domain optical coherence tomography (SD-OCT) images. The proposed method involves constructing a probability map from training samples using random forest classification. The probability map is constructed from a linear combination of structural texture, intensity, and layer thickness information. Then, a continuous max flow optimization algorithm is applied to the probability map to segment the retinal detachment-associated fluid regions. Experimental results from 37 retinal SD-OCT volumes from cases of CSC demonstrate the proposed method can achieve a true positive volume fraction (TPVF), false positive volume fraction (FPVF), positive predictive value (PPV), and dice similarity coefficient (DSC) of 92.1%, 0.53%, 94.7%, and 93.3%, respectively, for NRD segmentation and 92.5%, 0.14%, 80.9%, and 84.6%, respectively, for PED segmentation. The proposed method can be an automatic tool to evaluate serous retinal detachment and has the potential to improve the clinical evaluation of CSC.
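The probability-map stage described above can be sketched with a random forest over per-voxel features. The example below uses hypothetical features and labels and stands in for, rather than reproduces, the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.random((2000, 5))                              # per-voxel features: texture, intensity, thickness, ...
y_train = (X_train[:, 0] + X_train[:, 3] > 1.0).astype(int)  # hypothetical fluid (1) / background (0) labels

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

D, H, W = 8, 64, 64                                           # a small hypothetical SD-OCT volume
X_volume = rng.random((D * H * W, 5))                         # features for every voxel, flattened row-wise
prob_map = rf.predict_proba(X_volume)[:, 1].reshape(D, H, W)  # class-1 probability volume fed to the max-flow step
```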
Affiliation(s)
- Menglin Wu: School of Computer Science and Technology, Nanjing Tech University, Nanjing, China (these authors contributed equally to this manuscript)
- Wen Fan: Department of Ophthalmology, First Affiliated Hospital with Nanjing Medical University, Nanjing, China (these authors contributed equally to this manuscript)
- Qiang Chen: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Zhenlong Du: School of Computer Science and Technology, Nanjing Tech University, Nanjing, China
- Xiaoli Li: School of Computer Science and Technology, Nanjing Tech University, Nanjing, China
- Songtao Yuan: Department of Ophthalmology, First Affiliated Hospital with Nanjing Medical University, Nanjing, China
- Hyunjin Park: School of Electronic and Electrical Engineering, Sungkyunkwan University, South Korea; Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), South Korea
17. Fang L, Yang L, Li S, Rabbani H, Liu Z, Peng Q, Chen X. Automatic detection and recognition of multiple macular lesions in retinal optical coherence tomography images with multi-instance multilabel learning. J Biomed Opt 2017;22:66014. PMID: 28655052; DOI: 10.1117/1.jbo.22.6.066014.
Abstract
Detection and recognition of macular lesions in optical coherence tomography (OCT) are very important for the diagnosis and treatment of retinal diseases. Because one retinal disease (e.g., diabetic retinopathy) may produce multiple lesions (e.g., edema, exudates, and microaneurysms) and patients may suffer from multiple retinal diseases, multiple lesions often coexist within one retinal image. Therefore, a single-lesion-based detector may not support the diagnosis of clinical eye diseases. To address this issue, we propose a multi-instance multilabel-based lesions recognition (MIML-LR) method for the simultaneous detection and recognition of multiple lesions. The proposed MIML-LR method consists of the following steps: (1) segment the regions of interest (ROIs) for different lesions, (2) compute descriptive instances (features) for each lesion region, (3) construct multilabel detectors, and (4) recognize each ROI with the detectors. The proposed MIML-LR method was tested on 823 clinically labeled OCT images with normal maculae and maculae with three common lesions: epiretinal membrane, edema, and drusen. For each input OCT image, our MIML-LR method can automatically identify the number of lesions and assign the class labels, achieving an average accuracy of 88.72% for cases with multiple lesions, which better assists macular disease diagnosis and treatment.
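Step (3) above builds one binary detector per lesion label. A much-simplified sketch of that multilabel step with scikit-learn (one-vs-rest over a binary label matrix) follows; the features and labels are hypothetical, and this stands in for, not reimplements, the multi-instance multilabel formulation.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.random((200, 64))                         # per-image descriptors (hypothetical)
Y = rng.integers(0, 2, size=(200, 3))             # binary label columns: ERM, edema, drusen

clf = OneVsRestClassifier(LinearSVC()).fit(X, Y)  # one binary detector per lesion label
print(clf.predict(X[:2]))                         # predicted [ERM, edema, drusen] vectors for two images
```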
Affiliation(s)
- Leyuan Fang: Hunan University, College of Electrical and Information Engineering, Changsha, Hunan, China
- Liumao Yang: Hunan University, College of Electrical and Information Engineering, Changsha, Hunan, China
- Shutao Li: Hunan University, College of Electrical and Information Engineering, Changsha, Hunan, China
- Hossein Rabbani: Isfahan University of Medical Sciences, Medical Image and Signal Processing Research Center, Isfahan, Iran
- Zhimin Liu: The First Affiliated Hospital of Hunan University of Chinese Medicine, Department of Ophthalmology, Changsha, Hunan, China
- Qinghua Peng: The First Affiliated Hospital of Hunan University of Chinese Medicine, Department of Ophthalmology, Changsha, Hunan, China
- Xiangdong Chen: The First Affiliated Hospital of Hunan University of Chinese Medicine, Department of Ophthalmology, Changsha, Hunan, China
18. Wu M, Chen Q, He X, Li P, Fan W, Yuan S, Park H. Automatic Subretinal Fluid Segmentation of Retinal SD-OCT Images With Neurosensory Retinal Detachment Guided by Enface Fundus Imaging. IEEE Trans Biomed Eng 2017;65:87-95. PMID: 28436839; DOI: 10.1109/tbme.2017.2695461.
Abstract
OBJECTIVE Accurate segmentation of neurosensory retinal detachment (NRD)-associated subretinal fluid in spectral domain optical coherence tomography (SD-OCT) is vital for the assessment of central serous chorioretinopathy (CSC). A novel two-stage segmentation algorithm, guided by Enface fundus imaging, is proposed. METHODS In the first stage, the Enface fundus image was segmented using a thickness map prior to detecting the fluid-associated abnormalities with diffuse boundaries. In the second stage, the locations of the abnormalities were used to restrict the spatial extent of the fluid region, and a fuzzy level set method with a spatial smoothness constraint was applied to subretinal fluid segmentation in the SD-OCT scans. RESULTS Experimental results from 31 retinal SD-OCT volumes with CSC demonstrate that our method can achieve a true positive volume fraction (TPVF), false positive volume fraction (FPVF), and positive predictive value (PPV) of 94.3%, 0.97%, and 93.6%, respectively, for NRD regions. Our approach can also discriminate NRD-associated subretinal fluid from subretinal pigment epithelium fluid associated with pigment epithelial detachment, with a TPVF, FPVF, and PPV of 93.8%, 0.40%, and 90.5%, respectively. CONCLUSION We report a fully automatic method for the segmentation of subretinal fluid. SIGNIFICANCE Our method shows the potential to improve clinical therapy for CSC.
19. Hassan B, Raja G, Hassan T, Usman Akram M. Structure tensor based automated detection of macular edema and central serous retinopathy using optical coherence tomography images. J Opt Soc Am A Opt Image Sci Vis 2016;33:455-463. PMID: 27140751; DOI: 10.1364/josaa.33.000455.
Abstract
Macular edema (ME) and central serous retinopathy (CSR) are two macular diseases that affect a person's central vision if left untreated. Optical coherence tomography (OCT) imaging is the latest eye examination technique; it shows a cross-sectional view of the retinal layers and can be used to detect many retinal disorders at an early stage. Many researchers have conducted clinical studies on ME and CSR and reported significant findings in macular OCT scans. However, this paper proposes an automated method for the classification of ME and CSR from OCT images using a support vector machine (SVM) classifier. Five distinct features (three based on the thickness profiles of the sub-retinal layers and two based on cyst fluids within the sub-retinal layers) are extracted from 30 labeled images (10 ME, 10 CSR, and 10 healthy), and the SVM is trained on them. We applied the proposed algorithm to 90 time-domain OCT (TD-OCT) images (30 ME, 30 CSR, 30 healthy) from 73 patients. Our algorithm correctly classified 88 of the 90 images, with an accuracy, sensitivity, and specificity of 97.77%, 100%, and 93.33%, respectively.