1
Zhu Z, Wang Y, Qi Z, Hu W, Zhang X, Wagner SK, Wang Y, Ran AR, Ong J, Waisberg E, Masalkhi M, Suh A, Tham YC, Cheung CY, Yang X, Yu H, Ge Z, Wang W, Sheng B, Liu Y, Lee AG, Denniston AK, Wijngaarden PV, Keane PA, Cheng CY, He M, Wong TY. Oculomics: Current concepts and evidence. Prog Retin Eye Res 2025; 106:101350. [PMID: 40049544 DOI: 10.1016/j.preteyeres.2025.101350]
Abstract
The eye provides novel insights into general health, as well as the pathogenesis and development of systemic diseases. In the past decade, growing evidence has demonstrated that the eye's structure and function mirror multiple systemic health conditions, especially cardiovascular diseases, neurodegenerative disorders, and kidney impairments. This has given rise to the field of oculomics: the application of ophthalmic biomarkers to understand mechanisms and to detect and predict disease. The development of this field has been accelerated by three major advances: 1) the availability and widespread clinical adoption of high-resolution, non-invasive ophthalmic imaging ("hardware"); 2) the availability of large studies to interrogate associations ("big data"); and 3) the development of novel analytical methods, including artificial intelligence (AI) ("software"). Oculomics offers an opportunity to enhance our understanding of the interplay between the eye and the body, while supporting the development of innovative diagnostic, prognostic, and therapeutic tools. These advances have been further accelerated by developments in AI, coupled with large-scale datasets linking ocular imaging with systemic health data. Oculomics also enables the detection, screening, diagnosis, and monitoring of many systemic health conditions. Furthermore, AI-based oculomics allows prediction of systemic disease risk, enabling risk stratification and opening new avenues for individualized risk prediction and prevention, thereby facilitating personalized medicine. In this review, we summarise current concepts and evidence in the field of oculomics, highlighting the progress that has been made, remaining challenges, and opportunities for future research.
Affiliation(s)
- Zhuoting Zhu
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, VIC, Australia.
- Yueye Wang
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Ziyi Qi
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, VIC, Australia; Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Eye Diseases, Shanghai, China
- Wenyi Hu
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, VIC, Australia
- Xiayin Zhang
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Siegfried K Wagner
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK; Institute of Ophthalmology, University College London, London, UK
- Yujie Wang
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, VIC, Australia
- An Ran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Joshua Ong
- Department of Ophthalmology and Visual Sciences, University of Michigan Kellogg Eye Center, Ann Arbor, USA
- Ethan Waisberg
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- Mouayad Masalkhi
- University College Dublin School of Medicine, Belfield, Dublin, Ireland
- Alex Suh
- Tulane University School of Medicine, New Orleans, LA, USA
- Yih Chung Tham
- Department of Ophthalmology and Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Xiaohong Yang
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Honghua Yu
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Zongyuan Ge
- Monash e-Research Center, Faculty of Engineering, Airdoc Research, Nvidia AI Technology Research Center, Monash University, Melbourne, VIC, Australia
- Wei Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yun Liu
- Google Research, Mountain View, CA, USA
- Andrew G Lee
- Center for Space Medicine and the Department of Ophthalmology, Baylor College of Medicine, Houston, USA; Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, USA; The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, USA; Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, USA; Department of Ophthalmology, University of Texas Medical Branch, Galveston, USA; University of Texas MD Anderson Cancer Center, Houston, USA; Texas A&M College of Medicine, Bryan, USA; Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, USA
- Alastair K Denniston
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK; Institute of Ophthalmology, University College London, London, UK; National Institute for Health and Care Research (NIHR) Birmingham Biomedical Research Centre (BRC), University Hospital Birmingham and University of Birmingham, Birmingham, UK; University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK; Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK
- Peter van Wijngaarden
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, VIC, Australia; Florey Institute of Neuroscience and Mental Health, University of Melbourne, Parkville, VIC, Australia
- Pearse A Keane
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK; Institute of Ophthalmology, University College London, London, UK
- Ching-Yu Cheng
- Department of Ophthalmology and Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Mingguang He
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China; Research Centre for SHARP Vision (RCSV), The Hong Kong Polytechnic University, Kowloon, Hong Kong, China; Centre for Eye and Vision Research (CEVR), 17W Hong Kong Science Park, Hong Kong, China
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua Medicine, Tsinghua University, Beijing, China.
2
Wang Q, Xiao Y, Ma X. Optimizing visible retinal area in pediatric ultra-widefield fundus imaging: The effectiveness of mydriasis and eyelid lifting. Photodiagnosis Photodyn Ther 2025; 52:104532. [PMID: 40015615 DOI: 10.1016/j.pdpdt.2025.104532]
Abstract
BACKGROUND Increasing the visible retinal area (VRA) enhances the detection of peripheral retinal pathologies. This study aims to maximize the VRA in children using ultra-widefield (UWF) fundus imaging. METHODS This cross-sectional, observational study included 53 children (106 eyes) who underwent examination in the Ophthalmology Department of Zhoupu Hospital from February to October 2023. Fundus images were captured using the ultra-widefield Optos imaging system (Daytona P200T). Parameters such as uncorrected visual acuity (UCVA), spherical equivalent refraction (SER), axial length (AL), non-contact tonometry (NCT), and pupil diameters (both undilated and dilated) were measured. A custom image segmentation tool based on deep learning was used to quantify the VRA. The eyes were categorized into four groups: undilated without eyelid lifting, undilated with eyelid lifting, dilated without eyelid lifting, and dilated with eyelid lifting. RESULTS There were significant differences in VRA between the four groups (χ² = 79.686, P < 0.001). Mydriasis increased VRA by 8.4% (P = 0.001), eyelid lifting increased VRA by 18.1% (P < 0.001), and combining both increased VRA by 20% (P < 0.001). UCVA was negatively correlated with VRA in the undilated condition without eyelid lifting (r = -0.237, P = 0.014); under dilation without eyelid lifting, SER was negatively correlated and AL positively correlated with VRA (r = -0.310, P = 0.001; r = 0.264, P = 0.006). CONCLUSION Combining mydriasis with manual eyelid lifting significantly enhances the VRA in UWF fundus imaging, effectively mitigating artifacts caused by eyelashes and eyelids. This technique improves the detection rate of peripheral retinal pathologies.
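A minimal sketch of the group comparison and correlation analyses implied by the abstract. The exact tests are not named there; a Kruskal-Wallis H test (whose statistic is commonly reported as χ²) across the four imaging conditions and Pearson correlations are one plausible reading, and the file and column names are hypothetical.

```python
import pandas as pd
from scipy.stats import kruskal, pearsonr

df = pd.read_csv("vra_measurements.csv")  # hypothetical file

# Four conditions: undilated/dilated x with/without eyelid lifting
groups = [g["vra_mm2"].values for _, g in df.groupby("condition")]
stat, p = kruskal(*groups)
print(f"Kruskal-Wallis chi2 = {stat:.3f}, P = {p:.4f}")

# Correlation with VRA within one imaging condition
base = df[df["condition"] == "undilated_no_lift"]
r, p = pearsonr(base["ucva_logmar"], base["vra_mm2"])
print(f"UCVA vs VRA: r = {r:.3f}, P = {p:.3f}")
```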
Affiliation(s)
- Qingxia Wang
- Graduate School of Shanghai University of Traditional Chinese Medicine, Shanghai 201203, China; Department of Ophthalmology, Shanghai University of Medicine and Health Sciences Affiliated Zhoupu Hospital, Shanghai 201318, China
- Yuanyuan Xiao
- Department of Ophthalmology, Shanghai University of Medicine and Health Sciences Affiliated Zhoupu Hospital, Shanghai 201318, China
- Xiaoyun Ma
- Graduate School of Shanghai University of Traditional Chinese Medicine, Shanghai 201203, China; Department of Ophthalmology, Shanghai University of Medicine and Health Sciences Affiliated Zhoupu Hospital, Shanghai 201318, China.
3
Belhadi A, Djenouri Y, Belbachir AN. Ensemble fuzzy deep learning for brain tumor detection. Sci Rep 2025; 15:6124. [PMID: 39972098 PMCID: PMC11840070 DOI: 10.1038/s41598-025-90572-5]
Abstract
This research presents a novel ensemble fuzzy deep learning approach for brain Magnetic Resonance Imaging (MRI) analysis, aiming to improve the segmentation of brain tissues and abnormalities. The method integrates multiple components, including diverse deep learning architectures enhanced with volumetric fuzzy pooling, a model fusion strategy, and an attention mechanism to focus on the most relevant regions of the input data. The process begins by collecting medical data using sensors to acquire MRI images. These data are then used to train several deep learning models that are specifically designed to handle various aspects of brain MRI segmentation. To enhance the model's performance, an efficient ensemble learning method is employed to combine the predictions of multiple models, ensuring that the final decision accounts for different strengths of each individual model. A key feature of the approach is the construction of a knowledge base that stores data from training images and associates it with the most suitable model for each specific sample. During the inference phase, this knowledge base is consulted to quickly identify and select the best model for processing new test images, based on the similarity between the test data and previously encountered samples. The proposed method is rigorously tested on real-world brain MRI segmentation benchmarks, demonstrating superior performance in comparison to existing techniques. Our proposed method achieves an Intersection over Union (IoU) of 95% on the complete Brain MRI Segmentation dataset, demonstrating a 10% improvement over baseline solutions.
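The headline result is reported as Intersection over Union (IoU); a minimal sketch of that metric for binary segmentation masks, with toy rectangular masks standing in for real predictions and annotations:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """IoU between two binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float((inter + eps) / (union + eps))

# Example: a predicted tumour mask vs. a ground-truth annotation
pred = np.zeros((256, 256)); pred[60:140, 60:140] = 1
gt = np.zeros((256, 256)); gt[70:150, 70:150] = 1
print(f"IoU = {iou(pred, gt):.3f}")
```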
Affiliation(s)
- Youcef Djenouri
- Department of MicroSystems, University of South-Eastern Norway, Kongsberg, Norway.
4
Simeri A, Pezzi G, Arena R, Papalia G, Szili-Torok T, Greco R, Veltri P, Greco G, Pezzi V, Provenzano M, Zaza G. Artificial intelligence in chronic kidney diseases: methodology and potential applications. Int Urol Nephrol 2025; 57:159-168. [PMID: 39052168 PMCID: PMC11695560 DOI: 10.1007/s11255-024-04165-8]
Abstract
Chronic kidney disease (CKD) represents a significant global health challenge, characterized by kidney damage and decreased function. Its prevalence has steadily increased, necessitating a comprehensive understanding of its epidemiology, risk factors, and management strategies. While traditional prognostic markers such as estimated glomerular filtration rate (eGFR) and albuminuria provide valuable insights, they may not fully capture the complexity of CKD progression and associated cardiovascular (CV) risks. This paper reviews the current state of renal and CV risk prediction in CKD, highlighting the limitations of traditional models and the potential for integrating artificial intelligence (AI) techniques. AI, particularly machine learning (ML) and deep learning (DL), offers a promising avenue for enhancing risk prediction by analyzing vast and diverse patient data, including genetic markers, biomarkers, and imaging. By identifying intricate patterns and relationships within datasets, AI algorithms can generate more comprehensive risk profiles, enabling personalized and nuanced risk assessments. Despite its potential, the integration of AI into clinical practice faces challenges such as the opacity of some algorithms and concerns regarding data quality, privacy, and bias. Efforts towards explainable AI (XAI) and rigorous data governance are essential to ensure transparency, interpretability, and trustworthiness in AI-driven predictions.
Affiliation(s)
- Andrea Simeri
- Department of Mathematics and Computer Science, University of Calabria, 87036, Rende, CS, Italy
- Giuseppe Pezzi
- Department of Medical and Surgical Sciences, University of Catanzaro, 88100, Catanzaro, Italy
- Roberta Arena
- Nephrology, Dialysis and Renal Transplant Unit, Department of Pharmacy, Health and Nutritional Sciences, University of Calabria, Rende - Hospital 'SS. Annunziata', Cosenza, Italy
- Giuliana Papalia
- Nephrology, Dialysis and Renal Transplant Unit, Department of Pharmacy, Health and Nutritional Sciences, University of Calabria, Rende - Hospital 'SS. Annunziata', Cosenza, Italy
- Tamas Szili-Torok
- Division of Nephrology, Department of Internal Medicine, University Medical Center Groningen, Groningen, the Netherlands
- Rosita Greco
- Nephrology, Dialysis and Renal Transplant Unit, Department of Pharmacy, Health and Nutritional Sciences, University of Calabria, Rende - Hospital 'SS. Annunziata', Cosenza, Italy
- Pierangelo Veltri
- Department of Computer Science, Modeling, Electronics and Systems Engineering, University of Calabria, 87036, Rende, CS, Italy
- Gianluigi Greco
- Department of Mathematics and Computer Science, University of Calabria, 87036, Rende, CS, Italy
- Vincenzo Pezzi
- Nephrology, Dialysis and Renal Transplant Unit, Department of Pharmacy, Health and Nutritional Sciences, University of Calabria, Rende - Hospital 'SS. Annunziata', Cosenza, Italy
- Michele Provenzano
- Nephrology, Dialysis and Renal Transplant Unit, Department of Pharmacy, Health and Nutritional Sciences, University of Calabria, Rende - Hospital 'SS. Annunziata', Cosenza, Italy.
- Gianluigi Zaza
- Nephrology, Dialysis and Renal Transplant Unit, Department of Pharmacy, Health and Nutritional Sciences, University of Calabria, Rende - Hospital 'SS. Annunziata', Cosenza, Italy
5
Xu Y, Sun R, Hu M, Zeng H. A Dual-Modal Fusion Network Using Optical Coherence Tomography and Fundus Images in Detection of Glaucomatous Optic Neuropathy. Curr Eye Res 2024; 49:1253-1259. [PMID: 38979787 DOI: 10.1080/02713683.2024.2375401]
Abstract
PURPOSE We designed a dual-modal fusion network to detect glaucomatous optic neuropathy, utilizing both retinal nerve fiber layer thickness from optical coherence tomography reports and fundus images. METHODS A total of 327 healthy subjects (410 eyes) and 87 glaucomatous optic neuropathy patients (113 eyes) were included. The retinal nerve fiber layer thickness from optical coherence tomography reports and fundus images were used as predictors in the dual-modal fusion network to diagnose glaucoma. The area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity were measured to compare our method with other approaches. RESULTS The accuracy of our dual-modal fusion network using both retinal nerve fiber layer thickness from optical coherence tomography reports and fundus images was 0.935, and our method achieved a significantly larger area under the receiver operating characteristic curve of 0.968 (95% confidence interval, 0.937-0.999). Using retinal nerve fiber layer thickness only, our optical coherence tomography Net achieved an area under the curve of 0.916 (95% confidence interval, 0.855-0.977), compared with 0.841 (95% confidence interval, 0.749-0.933) for clock-sector division, 0.862 (95% confidence interval, 0.757-0.968) for inferior, superior, nasal, temporal sector division, and 0.886 (95% confidence interval, 0.815-0.957) for optic disc sector division. Using fundus images only, our Image Net achieved an area under the curve of 0.867 (95% confidence interval, 0.781-0.952), compared with 0.774 (95% confidence interval, 0.670-0.878) for ResNet50 and 0.747 (95% confidence interval, 0.628-0.866) for VGG16. CONCLUSION Our dual-modal fusion network utilizing both retinal nerve fiber layer thickness from optical coherence tomography reports and fundus images can diagnose glaucoma with much better performance than current approaches based on optical coherence tomography only or fundus images only.
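A minimal sketch of the general idea of feature-level dual-modal fusion, assuming the RNFLT is available as a 1-D vector of sector thicknesses. This is not the authors' exact architecture; the backbone, embedding sizes, and input dimensions are placeholders.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class DualModalNet(nn.Module):
    def __init__(self, n_rnflt: int = 768, n_classes: int = 2):
        super().__init__()
        self.image_branch = resnet18(weights=None)
        self.image_branch.fc = nn.Identity()        # 512-d image features
        self.rnflt_branch = nn.Sequential(          # embed the RNFLT vector
            nn.Linear(n_rnflt, 128), nn.ReLU(), nn.Linear(128, 64), nn.ReLU())
        self.head = nn.Linear(512 + 64, n_classes)  # classify fused features

    def forward(self, image: torch.Tensor, rnflt: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.image_branch(image), self.rnflt_branch(rnflt)], dim=1)
        return self.head(fused)

model = DualModalNet()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 2])
```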
Affiliation(s)
- Yongli Xu
- College of Mathematics and Physics, Beijing University of Chemical Technology, Beijing, China
- College of Statistics and Data Science, Faculty of Science, Beijing University of Technology, Beijing, China
- Run Sun
- College of Mathematics and Physics, Beijing University of Chemical Technology, Beijing, China
- Man Hu
- Department of Ophthalmology, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing, China
- Hui Zeng
- College of Mathematics and Physics, Beijing University of Chemical Technology, Beijing, China
6
Wang S, He X, Jian Z, Li J, Xu C, Chen Y, Liu Y, Chen H, Huang C, Hu J, Liu Z. Advances and prospects of multi-modal ophthalmic artificial intelligence based on deep learning: a review. Eye Vis (Lond) 2024; 11:38. [PMID: 39350240 PMCID: PMC11443922 DOI: 10.1186/s40662-024-00405-1]
Abstract
BACKGROUND In recent years, ophthalmology has emerged as a new frontier in medical artificial intelligence (AI), with multi-modal AI in ophthalmology garnering significant attention across interdisciplinary research. This integration of various data types and models holds paramount importance, as it enables the provision of detailed and precise information for diagnosing eye and vision diseases. By leveraging multi-modal ophthalmic AI techniques, clinicians can enhance the accuracy and efficiency of diagnoses, and thus reduce the risks associated with misdiagnosis and oversight, while also enabling more precise management of eye and vision health. However, the widespread adoption of multi-modal ophthalmic AI poses significant challenges. MAIN TEXT In this review, we first comprehensively summarize the concept of modalities in the field of ophthalmology, the forms of fusion between modalities, and the progress of multi-modal ophthalmic AI technology. Finally, we discuss the challenges of current multi-modal AI applications in ophthalmology and feasible future research directions. CONCLUSION In the field of ophthalmic AI, evidence suggests that, when utilizing multi-modal data, deep learning-based multi-modal AI exhibits excellent diagnostic efficacy in assisting the diagnosis of various ophthalmic diseases. Particularly in the current era marked by the proliferation of large-scale models, multi-modal techniques represent the most promising and advantageous solution for addressing the diagnosis of various ophthalmic diseases from a comprehensive perspective. However, it must be acknowledged that numerous challenges remain before multi-modal techniques can be effectively employed in the clinical setting.
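As one concrete illustration of the fusion forms such reviews survey, here is a hedged sketch of decision-level ("late") fusion, in which each modality is scored by its own model and the class probabilities are combined; the models, inputs, and weights are placeholders, and this complements the feature-level fusion sketch shown earlier.

```python
import torch

@torch.no_grad()
def late_fusion(models, inputs, weights=None):
    """Average per-modality class probabilities.

    models: list of nn.Module, one per modality
    inputs: list of tensors, aligned with `models`
    weights: optional per-modality reliability weights
    """
    probs = [m(x).softmax(dim=1) for m, x in zip(models, inputs)]
    stacked = torch.stack(probs)                  # (n_modalities, B, C)
    if weights is not None:
        w = torch.tensor(weights).view(-1, 1, 1)
        return (stacked * w / w.sum()).sum(dim=0) # weighted average
    return stacked.mean(dim=0)                    # simple average
```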
Affiliation(s)
- Shaopan Wang
- Institute of Artificial Intelligence, Xiamen University, Xiamen, Fujian, China
- School of Informatics, Xiamen University, Xiamen, Fujian, China
- Xiamen University Affiliated Xiamen Eye Center, Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Fujian Engineering and Research Center of Eye Regenerative Medicine, Eye Institute of Xiamen University, School of Medicine, Xiamen University, Chengyi Building, 4Th Floor, 4221-122, South Xiang'an Rd, Xiamen, 361005, Fujian, China
- Xin He
- Xiamen University Affiliated Xiamen Eye Center, Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Fujian Engineering and Research Center of Eye Regenerative Medicine, Eye Institute of Xiamen University, School of Medicine, Xiamen University, Chengyi Building, 4Th Floor, 4221-122, South Xiang'an Rd, Xiamen, 361005, Fujian, China
- Department of Ophthalmology, the First Affiliated Hospital of Xiamen University, Xiamen University, Xiamen, Fujian, China
- Zhongquan Jian
- Institute of Artificial Intelligence, Xiamen University, Xiamen, Fujian, China
- School of Informatics, Xiamen University, Xiamen, Fujian, China
- Jie Li
- National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Changsheng Xu
- Institute of Artificial Intelligence, Xiamen University, Xiamen, Fujian, China
- School of Informatics, Xiamen University, Xiamen, Fujian, China
- Xiamen University Affiliated Xiamen Eye Center, Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Fujian Engineering and Research Center of Eye Regenerative Medicine, Eye Institute of Xiamen University, School of Medicine, Xiamen University, Chengyi Building, 4Th Floor, 4221-122, South Xiang'an Rd, Xiamen, 361005, Fujian, China
- Yuguang Chen
- Institute of Artificial Intelligence, Xiamen University, Xiamen, Fujian, China
- School of Informatics, Xiamen University, Xiamen, Fujian, China
- Xiamen University Affiliated Xiamen Eye Center, Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Fujian Engineering and Research Center of Eye Regenerative Medicine, Eye Institute of Xiamen University, School of Medicine, Xiamen University, Chengyi Building, 4Th Floor, 4221-122, South Xiang'an Rd, Xiamen, 361005, Fujian, China
- Yuwen Liu
- Xiamen University Affiliated Xiamen Eye Center, Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Fujian Engineering and Research Center of Eye Regenerative Medicine, Eye Institute of Xiamen University, School of Medicine, Xiamen University, Chengyi Building, 4Th Floor, 4221-122, South Xiang'an Rd, Xiamen, 361005, Fujian, China
- Han Chen
- Department of Ophthalmology, the First Affiliated Hospital of Xiamen University, Xiamen University, Xiamen, Fujian, China
- Caihong Huang
- Xiamen University Affiliated Xiamen Eye Center, Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Fujian Engineering and Research Center of Eye Regenerative Medicine, Eye Institute of Xiamen University, School of Medicine, Xiamen University, Chengyi Building, 4Th Floor, 4221-122, South Xiang'an Rd, Xiamen, 361005, Fujian, China
- Jiaoyue Hu
- Xiamen University Affiliated Xiamen Eye Center, Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Fujian Engineering and Research Center of Eye Regenerative Medicine, Eye Institute of Xiamen University, School of Medicine, Xiamen University, Chengyi Building, 4Th Floor, 4221-122, South Xiang'an Rd, Xiamen, 361005, Fujian, China.
- Department of Ophthalmology, Xiang'an Hospital of Xiamen University, Xiamen, Fujian, China.
- Zuguo Liu
- Institute of Artificial Intelligence, Xiamen University, Xiamen, Fujian, China.
- Xiamen University Affiliated Xiamen Eye Center, Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Fujian Engineering and Research Center of Eye Regenerative Medicine, Eye Institute of Xiamen University, School of Medicine, Xiamen University, Chengyi Building, 4Th Floor, 4221-122, South Xiang'an Rd, Xiamen, 361005, Fujian, China.
- Department of Ophthalmology, Xiang'an Hospital of Xiamen University, Xiamen, Fujian, China.
7
Ghenciu LA, Dima M, Stoicescu ER, Iacob R, Boru C, Hațegan OA. Retinal Imaging-Based Oculomics: Artificial Intelligence as a Tool in the Diagnosis of Cardiovascular and Metabolic Diseases. Biomedicines 2024; 12:2150. [PMID: 39335664 PMCID: PMC11430496 DOI: 10.3390/biomedicines12092150]
Abstract
Cardiovascular diseases (CVDs) are a major cause of mortality globally, emphasizing the need for early detection and effective risk assessment to improve patient outcomes. Advances in oculomics, which utilize the relationship between retinal microvascular changes and systemic vascular health, offer a promising non-invasive approach to assessing CVD risk. Retinal fundus imaging and optical coherence tomography/angiography (OCT/OCTA) provide critical information for early diagnosis, with retinal vascular parameters such as vessel caliber, tortuosity, and branching patterns identified as key biomarkers. Given the large volume of data generated during routine eye exams, there is a growing need for automated tools to aid in diagnosis and risk prediction. The study demonstrates that AI-driven analysis of retinal images can accurately predict cardiovascular risk factors, cardiovascular events, and metabolic diseases. These models achieved area under the curve (AUC) values ranging from 0.71 to 0.87, sensitivity between 71% and 89%, and specificity between 40% and 70%, surpassing traditional diagnostic methods in some cases. This approach highlights the potential of retinal imaging as a key component of personalized medicine, enabling more precise risk assessment and earlier intervention. It not only aids in detecting vascular abnormalities that may precede cardiovascular events but also offers a scalable, non-invasive, and cost-effective solution for widespread screening. However, the article also emphasizes the need for further research to standardize imaging protocols and validate the clinical utility of these biomarkers across different populations. By integrating oculomics into routine clinical practice, healthcare providers could significantly enhance early detection and management of systemic diseases, ultimately improving patient outcomes. Fundus image analysis thus represents a valuable tool in the future of precision medicine and cardiovascular health management.
Affiliation(s)
- Laura Andreea Ghenciu
- Department of Functional Sciences, 'Victor Babes' University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square No. 2, 300041 Timisoara, Romania
- Center for Translational Research and Systems Medicine, 'Victor Babes' University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square No. 2, 300041 Timisoara, Romania
- Mirabela Dima
- Department of Neonatology, 'Victor Babes' University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square No. 2, 300041 Timisoara, Romania
- Emil Robert Stoicescu
- Field of Applied Engineering Sciences, Specialization Statistical Methods and Techniques in Health and Clinical Research, Faculty of Mechanics, 'Politehnica' University Timisoara, Mihai Viteazul Boulevard No. 1, 300222 Timisoara, Romania
- Department of Radiology and Medical Imaging, 'Victor Babes' University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square No. 2, 300041 Timisoara, Romania
- Research Center for Pharmaco-Toxicological Evaluations, 'Victor Babes' University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square No. 2, 300041 Timisoara, Romania
- Roxana Iacob
- Field of Applied Engineering Sciences, Specialization Statistical Methods and Techniques in Health and Clinical Research, Faculty of Mechanics, 'Politehnica' University Timisoara, Mihai Viteazul Boulevard No. 1, 300222 Timisoara, Romania
- Doctoral School, "Victor Babes" University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square 2, 300041 Timisoara, Romania
- Department of Anatomy and Embriology, 'Victor Babes' University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania
- Casiana Boru
- Discipline of Anatomy and Embriology, Medicine Faculty, "Vasile Goldis" Western University of Arad, Revolution Boulevard 94, 310025 Arad, Romania
- Ovidiu Alin Hațegan
- Discipline of Anatomy and Embriology, Medicine Faculty, "Vasile Goldis" Western University of Arad, Revolution Boulevard 94, 310025 Arad, Romania
8
Martin E, Cook AG, Frost SM, Turner AW, Chen FK, McAllister IL, Nolde JM, Schlaich MP. Ocular biomarkers: useful incidental findings by deep learning algorithms in fundus photographs. Eye (Lond) 2024; 38:2581-2588. [PMID: 38734746 PMCID: PMC11385472 DOI: 10.1038/s41433-024-03085-2]
Abstract
BACKGROUND/OBJECTIVES Artificial intelligence can assist with ocular image analysis for screening and diagnosis, but it is not yet capable of autonomous full-spectrum screening. Hypothetically, false-positive results may have unrealized screening potential, arising from signals that persist despite training and/or from ambiguous signals such as biomarker overlap or high comorbidity. The study aimed to explore the potential to detect clinically useful incidental ocular biomarkers by screening fundus photographs of hypertensive adults using diabetic deep learning algorithms. SUBJECTS/METHODS Patients referred for treatment-resistant hypertension were imaged at a hospital unit in Perth, Australia, between 2016 and 2022. The same 45° colour fundus photograph selected for each of the 433 participants imaged was processed by three deep learning algorithms. Two expert retinal specialists graded all false-positive results for diabetic retinopathy in non-diabetic participants. RESULTS Of the 29 non-diabetic participants misclassified as positive for diabetic retinopathy, 28 (97%) had clinically useful retinal biomarkers. The models designed to screen for fewer diseases captured more incidental disease. All three algorithms showed a positive correlation between the severity of hypertensive retinopathy and misclassified diabetic retinopathy. CONCLUSIONS The results suggest that diabetic deep learning models may be responsive to hypertensive and other clinically useful retinal biomarkers within an at-risk, hypertensive cohort. The observation that models trained for fewer diseases captured more incidental pathology increases confidence in signalling hypotheses that align with using self-supervised learning to develop autonomous comprehensive screening. Meanwhile, non-referable and false-positive outputs of other deep learning screening models could be explored for immediate clinical use in other populations.
Affiliation(s)
- Eve Martin
- Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia.
- School of Population and Global Health, The University of Western Australia, Crawley, Australia.
- Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia.
- Australian e-Health Research Centre, Floreat, WA, Australia.
- Angus G Cook
- School of Population and Global Health, The University of Western Australia, Crawley, Australia
- Shaun M Frost
- Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia
- Australian e-Health Research Centre, Floreat, WA, Australia
- Angus W Turner
- Lions Eye Institute, Nedlands, WA, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Fred K Chen
- Lions Eye Institute, Nedlands, WA, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Centre for Eye Research Australia, The Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia
- Ophthalmology, Department of Surgery, The University of Melbourne, East Melbourne, VIC, Australia
- Ophthalmology Department, Royal Perth Hospital, Perth, Australia
- Ian L McAllister
- Lions Eye Institute, Nedlands, WA, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Janis M Nolde
- Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia
- Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia
- Markus P Schlaich
- Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia
- Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia
9
Xu Y, Liu H, Sun R, Wang H, Huo Y, Wang N, Hu M. Deep learning for predicting circular retinal nerve fiber layer thickness from fundus photographs and diagnosing glaucoma. Heliyon 2024; 10:e33813. [PMID: 39040392 PMCID: PMC11261845 DOI: 10.1016/j.heliyon.2024.e33813]
Abstract
Purpose This study aimed to propose a new deep learning (DL) approach to automatically predict the retinal nerve fiber layer thickness (RNFLT) around the optic disc in fundus photographs, trained on optical coherence tomography (OCT), and to diagnose glaucoma based on the predicted comprehensive RNFLT information. Methods A total of 1403 pairs of fundus photographs and OCT RNFLT scans from 1403 eyes of 1196 participants were included. A residual deep neural network was trained to predict the RNFLT for each local image in a fundus photograph, and a RNFLT report was then generated from the local images. Two indicators were designed based on the generated report. The support vector machine (SVM) algorithm was used to diagnose glaucoma based on the two indicators. Results A strong correlation was found between the predicted and actual RNFLT values on local images. On three testing datasets, the Pearson r was 0.893, 0.850, and 0.831, respectively, and the mean absolute error of the prediction was 14.345, 17.780, and 19.250 μm, respectively. The area under the receiver operating characteristic curve for discriminating glaucomatous from healthy eyes was 0.860 (95% confidence interval, 0.799-0.921). Conclusions We established a novel local image-based DL approach that provides comprehensive quantitative RNFLT information from fundus photographs, which was used to diagnose glaucoma. In addition, training a deep neural network on local images to predict objective, detailed information in fundus photographs provides a new paradigm for the diagnosis of ophthalmic diseases.
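A sketch of the abstract's second stage, in which scalar indicators derived from the predicted RNFLT profile feed an SVM. The paper does not spell out its two indicators, so mean thickness and the fraction of sectors below a normative threshold are assumed stand-ins, and the toy data below merely exercises the pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def indicators(rnflt_profile: np.ndarray, thin_threshold: float = 70.0):
    """Two scalar summaries of a predicted RNFLT profile (assumed design)."""
    return np.array([rnflt_profile.mean(),
                     (rnflt_profile < thin_threshold).mean()])

# profiles: (n_eyes, n_sectors) RNFLT values predicted by the CNN stage
rng = np.random.default_rng(0)
profiles = np.abs(rng.normal(95, 15, (200, 360)))
labels = (profiles.mean(axis=1) < 90).astype(int)  # toy glaucoma labels

X = np.vstack([indicators(p) for p in profiles])
clf = SVC(kernel="rbf", probability=True).fit(X, labels)
print(f"training accuracy = {clf.score(X, labels):.3f}")
```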
Affiliation(s)
- Yongli Xu
- College of Statistics and Data Science, Beijing University of Technology, Beijing, China
- Department of Mathematics, Beijing University of Chemical Technology, Beijing, China
- Hanruo Liu
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology & Visual Science Key Lab, Beijing, China
- School of Information and Electronics, Beijing Institute of Technology, Beijing, China
- Run Sun
- Department of Mathematics, Beijing University of Chemical Technology, Beijing, China
- Huaizhou Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology & Visual Science Key Lab, Beijing, China
- Yanjiao Huo
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology & Visual Science Key Lab, Beijing, China
- Ningli Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology & Visual Science Key Lab, Beijing, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University & Capital Medical University, Beijing Tongren Hospital, Beijing, China
- Man Hu
- Department of Ophthalmology, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing, China
10
Yu B, Kaku A, Liu K, Parnandi A, Fokas E, Venkatesan A, Pandit N, Ranganath R, Schambra H, Fernandez-Granda C. Quantifying impairment and disease severity using AI models trained on healthy subjects. NPJ Digit Med 2024; 7:180. [PMID: 38969786 PMCID: PMC11226623 DOI: 10.1038/s41746-024-01173-x]
Abstract
Automatic assessment of impairment and disease severity is a key challenge in data-driven medicine. We propose a framework to address this challenge, which leverages AI models trained exclusively on healthy individuals. The COnfidence-Based chaRacterization of Anomalies (COBRA) score exploits the decrease in confidence of these models when presented with impaired or diseased patients to quantify their deviation from the healthy population. We applied the COBRA score to address a key limitation of current clinical evaluation of upper-body impairment in stroke patients. The gold-standard Fugl-Meyer Assessment (FMA) requires in-person administration by a trained assessor for 30-45 minutes, which restricts monitoring frequency and precludes physicians from adapting rehabilitation protocols to the progress of each patient. The COBRA score, computed automatically in under one minute, is shown to be strongly correlated with the FMA on an independent test cohort for two different data modalities: wearable sensors (ρ = 0.814, 95% CI [0.700, 0.888]) and video (ρ = 0.736, 95% CI [0.584, 0.838]). To demonstrate the generalizability of the approach to other conditions, the COBRA score was also applied to quantify severity of knee osteoarthritis from magnetic resonance imaging scans, again achieving significant correlation with an independent clinical assessment (ρ = 0.644, 95% CI [0.585, 0.696]).
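A minimal reading of the COBRA idea, under stated assumptions: a classifier trained only on healthy data is less confident on impaired inputs, so one minus its mean top-class softmax confidence over a patient's samples can serve as a severity score. The model, batching, and aggregation below are simplified placeholders, not the paper's exact pipeline.

```python
import torch

@torch.no_grad()
def cobra_score(model, patient_batches):
    """One COBRA-style score per patient.

    patient_batches: iterable of tensors, each a batch of samples
    (e.g., sensor windows or video clips) for one patient.
    """
    scores = []
    for x in patient_batches:
        conf = model(x).softmax(dim=1).max(dim=1).values  # per-sample confidence
        scores.append(1.0 - conf.mean().item())           # deviation from healthy
    return scores

# Validation would correlate the scores with a clinical scale, e.g.
# scipy.stats.spearmanr(cobra_score(model, batches), fma_scores).
```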
Affiliation(s)
- Boyang Yu
- Center for Data Science, New York University, 60 Fifth Ave, New York, NY, 10011, USA
- Aakash Kaku
- Center for Data Science, New York University, 60 Fifth Ave, New York, NY, 10011, USA
- Kangning Liu
- Center for Data Science, New York University, 60 Fifth Ave, New York, NY, 10011, USA
- Avinash Parnandi
- Department of Neurology, NYU Grossman School of Medicine, 550 1st Ave, New York, NY, 10016, USA
- Department of Rehabilitation Medicine, NYU Grossman School of Medicine, 550 1st Ave, New York, NY, 10016, USA
- Emily Fokas
- Department of Neurology, NYU Grossman School of Medicine, 550 1st Ave, New York, NY, 10016, USA
- Anita Venkatesan
- Department of Neurology, NYU Grossman School of Medicine, 550 1st Ave, New York, NY, 10016, USA
- Natasha Pandit
- Department of Rehabilitation Medicine, NYU Grossman School of Medicine, 550 1st Ave, New York, NY, 10016, USA
- Rajesh Ranganath
- Center for Data Science, New York University, 60 Fifth Ave, New York, NY, 10011, USA
- Courant Institute of Mathematical Sciences, New York University, 251 Mercer St, New York, NY, 10012, USA
- Heidi Schambra
- Department of Neurology, NYU Grossman School of Medicine, 550 1st Ave, New York, NY, 10016, USA.
- Department of Rehabilitation Medicine, NYU Grossman School of Medicine, 550 1st Ave, New York, NY, 10016, USA.
- Carlos Fernandez-Granda
- Center for Data Science, New York University, 60 Fifth Ave, New York, NY, 10011, USA.
- Courant Institute of Mathematical Sciences, New York University, 251 Mercer St, New York, NY, 10012, USA.
11
Sinha S, Ramesh PV, Nishant P, Morya AK, Prasad R. Novel automated non-invasive detection of ocular surface squamous neoplasia using artificial intelligence. World J Methodol 2024; 14:92267. [PMID: 38983656 PMCID: PMC11229874 DOI: 10.5662/wjm.v14.i2.92267]
Abstract
Ocular surface squamous neoplasia (OSSN) is a common eye surface tumour, characterized by the growth of abnormal cells on the ocular surface. OSSN includes invasive squamous cell carcinoma (SCC), in which tumour cells penetrate the basement membrane and infiltrate the stroma, as well as non-invasive conjunctival intraepithelial neoplasia, dysplasia, and SCC in situ, thereby presenting a challenge in early detection and diagnosis. Early identification and precise demarcation of the OSSN border lead to straightforward and curative treatments, such as topical medicines, whereas advanced invasive lesions may need orbital exenteration, which carries a risk of death. Artificial intelligence (AI) has emerged as a promising tool in the field of eye care and holds potential for application in OSSN management. AI algorithms trained on large datasets can analyze ocular surface images to identify suspicious lesions associated with OSSN, aiding ophthalmologists in early detection and diagnosis. AI can also track and monitor lesion progression over time, providing objective measurements to guide treatment decisions. Furthermore, AI can assist in treatment planning by offering personalized recommendations based on patient data and predicting the treatment response. This manuscript highlights the role of AI in OSSN, specifically focusing on its contributions to early detection and diagnosis, assessment of lesion progression, treatment planning, telemedicine and remote monitoring, and research and data analysis.
Affiliation(s)
- Sony Sinha
- Department of Ophthalmology–Vitreo Retina, Neuro Ophthalmology and Oculoplasty, All India Institute of Medical Sciences, Patna 801507, India
- Prateek Nishant
- Department of Ophthalmology, ESIC Medical College, Patna 801113, India
- Arvind Kumar Morya
- Department of Ophthalmology, All India Institute of Medical Sciences, Hyderabad 508126, India
- Ripunjay Prasad
- Department of Ophthalmology, RP Eye Institute, Delhi 110001, India
12
Patterson EJ, Bounds AD, Wagner SK, Kadri-Langford R, Taylor R, Daly D. Oculomics: A Crusade Against the Four Horsemen of Chronic Disease. Ophthalmol Ther 2024; 13:1427-1451. [PMID: 38630354 PMCID: PMC11109082 DOI: 10.1007/s40123-024-00942-x]
Abstract
Chronic, non-communicable diseases present a major barrier to living a long and healthy life. In many cases, early diagnosis can facilitate prevention, monitoring, and treatment efforts, improving patient outcomes. There is therefore a critical need to make screening techniques as accessible, unintimidating, and cost-effective as possible. The association between ocular biomarkers and systemic health and disease (oculomics) presents an attractive opportunity for detection of systemic diseases, as ophthalmic techniques are often relatively low-cost, fast, and non-invasive. In this review, we highlight the key associations between structural biomarkers in the eye and the four globally leading causes of morbidity and mortality: cardiovascular disease, cancer, neurodegenerative disease, and metabolic disease. We observe that neurodegenerative disease is a particularly promising target for oculomics, with biomarkers detected in multiple ocular structures. Cardiovascular disease biomarkers are present in the choroid, retinal vasculature, and retinal nerve fiber layer, and metabolic disease biomarkers are present in the eyelid, tear fluid, lens, and retinal vasculature. In contrast, only the tear fluid emerged as a promising ocular target for the detection of cancer. The retina is a rich source of oculomics data, the analysis of which has been enhanced by artificial intelligence-based tools. Although not all biomarkers are disease-specific, limiting their current diagnostic utility, future oculomics research will likely benefit from combining data from various structures to improve specificity, as well as active design, development, and optimization of instruments that target specific disease signatures, thus facilitating differential diagnoses.
Affiliation(s)
- Siegfried K Wagner
- Moorfields Eye Hospital NHS Trust, 162 City Road, London, EC1V 2PD, UK
- UCL Institute of Ophthalmology, University College London, 11-43 Bath Street, London, EC1V 9EL, UK
- Robin Taylor
- Occuity, The Blade, Abbey Square, Reading, Berkshire, RG1 3BE, UK
- Dan Daly
- Occuity, The Blade, Abbey Square, Reading, Berkshire, RG1 3BE, UK
13
Lang O, Yaya-Stupp D, Traynis I, Cole-Lewis H, Bennett CR, Lyles CR, Lau C, Irani M, Semturs C, Webster DR, Corrado GS, Hassidim A, Matias Y, Liu Y, Hammel N, Babenko B. Using generative AI to investigate medical imagery models and datasets. EBioMedicine 2024; 102:105075. [PMID: 38565004 PMCID: PMC10993140 DOI: 10.1016/j.ebiom.2024.105075]
Abstract
BACKGROUND AI models have shown promise in performing many medical imaging tasks. However, our ability to explain what signals these models have learned is severely lacking. Explanations are needed in order to increase the trust of doctors in AI-based models, especially in domains where AI prediction capabilities surpass those of humans. Moreover, such explanations could enable novel scientific discovery by uncovering signals in the data that aren't yet known to experts. METHODS In this paper, we present a workflow for generating hypotheses to understand which visual signals in images are correlated with a classification model's predictions for a given task. This approach leverages an automatic visual explanation algorithm followed by interdisciplinary expert review. We propose the following 4 steps: (i) Train a classifier to perform a given task to assess whether the imagery indeed contains signals relevant to the task; (ii) Train a StyleGAN-based image generator with an architecture that enables guidance by the classifier ("StylEx"); (iii) Automatically detect, extract, and visualize the top visual attributes that the classifier is sensitive towards. For visualization, we independently modify each of these attributes to generate counterfactual visualizations for a set of images (i.e., what the image would look like with the attribute increased or decreased); (iv) Formulate hypotheses for the underlying mechanisms, to stimulate future research. Specifically, present the discovered attributes and corresponding counterfactual visualizations to an interdisciplinary panel of experts so that hypotheses can account for social and structural determinants of health (e.g., whether the attributes correspond to known patho-physiological or socio-cultural phenomena, or could be novel discoveries). FINDINGS To demonstrate the broad applicability of our approach, we present results on eight prediction tasks across three medical imaging modalities-retinal fundus photographs, external eye photographs, and chest radiographs. We showcase examples where many of the automatically-learned attributes clearly capture clinically known features (e.g., types of cataract, enlarged heart), and demonstrate automatically-learned confounders that arise from factors beyond physiological mechanisms (e.g., chest X-ray underexposure is correlated with the classifier predicting abnormality, and eye makeup is correlated with the classifier predicting low hemoglobin levels). We further show that our method reveals a number of physiologically plausible, previously-unknown attributes based on the literature (e.g., differences in the fundus associated with self-reported sex, which were previously unknown). INTERPRETATION Our approach enables hypotheses generation via attribute visualizations and has the potential to enable researchers to better understand, improve their assessment, and extract new knowledge from AI-based models, as well as debug and design better datasets. Though not designed to infer causality, importantly, we highlight that attributes generated by our framework can capture phenomena beyond physiology or pathophysiology, reflecting the real world nature of healthcare delivery and socio-cultural factors, and hence interdisciplinary perspectives are critical in these investigations. Finally, we will release code to help researchers train their own StylEx models and analyze their predictive tasks of interest, and use the methodology presented in this paper for responsible interpretation of the revealed attributes. 
FUNDING Google.
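Step (iii) of the workflow, reduced to a hedged core: perturb one latent coordinate of a generator at a time and rank coordinates by how much the classifier's output moves. Real StylEx edits StyleGAN style channels discovered jointly with the classifier; `generator` and `classifier` below are abstract placeholders, not the paper's released models.

```python
import torch

@torch.no_grad()
def attribute_influence(generator, classifier, w: torch.Tensor,
                        target_class: int, delta: float = 2.0):
    """Rank latent coordinates by their counterfactual effect.

    w: latent code of shape (1, d); generator(w) returns an image batch,
    classifier(images) returns logits.
    """
    base = classifier(generator(w)).softmax(dim=1)[0, target_class]
    effects = []
    for i in range(w.shape[1]):
        w_mod = w.clone()
        w_mod[0, i] += delta                    # counterfactual edit of one attribute
        p = classifier(generator(w_mod)).softmax(dim=1)[0, target_class]
        effects.append((i, (p - base).item()))
    return sorted(effects, key=lambda t: -abs(t[1]))  # most influential first
```

The top-ranked coordinates would then be visualized as paired "attribute up / attribute down" images for expert review, as the workflow describes.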
Affiliation(s)
- Ilana Traynis
- Work Done at Google Via Advanced Clinical, Deerfield, IL, USA
- Courtney R Lyles
- Google, Mountain View, CA, USA; University of California San Francisco, Department of Medicine, San Francisco, CA, USA
- Yun Liu
- Google, Mountain View, CA, USA
14
Choi JY, Kim H, Kim JK, Lee IS, Ryu IH, Kim JS, Yoo TK. Deep learning prediction of steep and flat corneal curvature using fundus photography in post-COVID telemedicine era. Med Biol Eng Comput 2024; 62:449-463. [PMID: 37889431 DOI: 10.1007/s11517-023-02952-6]
Abstract
Fundus photography (FP) is increasingly used. Corneal curvature is an essential factor in refractive error and is associated with several pathological corneal conditions. As FP-based examination systems are already widely distributed, extracting information such as corneal curvature from FP would be helpful for telemedicine. This study aims to develop an FP-based deep learning model for corneal curvature prediction by categorizing corneas into steep, regular, and flat groups. The EfficientNetB0 architecture with transfer learning was used to learn FP patterns to predict flat, regular, and steep corneas. In validation, the model achieved a multiclass accuracy of 0.727, a Matthews correlation coefficient of 0.519, and an unweighted Cohen's κ of 0.590. The areas under the receiver operating characteristic curves for binary prediction of flat and steep corneas were 0.863 and 0.848, respectively. The optic nerve and its peripheral areas were the main focus of the model. The developed algorithm shows that FP can potentially be used as an imaging modality to estimate corneal curvature in the post-COVID-19 era, whereby patients may benefit from the detection of abnormal corneal curvature using FP in the telemedicine setting.
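A transfer-learning sketch consistent with the abstract's setup: an ImageNet-pretrained EfficientNetB0 with its classification head replaced for the three curvature classes. The torchvision backbone, freezing policy, and head are assumptions, not the authors' exact training protocol.

```python
import torch.nn as nn
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights

# Load an ImageNet-pretrained EfficientNetB0 backbone
model = efficientnet_b0(weights=EfficientNet_B0_Weights.IMAGENET1K_V1)

# Freeze the convolutional features; fine-tune only the new head
for p in model.features.parameters():
    p.requires_grad = False

in_features = model.classifier[1].in_features    # 1280 for EfficientNetB0
model.classifier[1] = nn.Linear(in_features, 3)  # flat / regular / steep
```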
Affiliation(s)
- Joon Yul Choi
- Department of Biomedical Engineering, Yonsei University, Wonju, South Korea
- Jin Kuk Kim
- Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea
- In Sik Lee
- Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea
- Ik Hee Ryu
- Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea
- Research and Development Department, VISUWORKS, Seoul, South Korea
- Jung Soo Kim
- Research and Development Department, VISUWORKS, Seoul, South Korea
- Tae Keun Yoo
- Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea.
- Research and Development Department, VISUWORKS, Seoul, South Korea.
15
Pewton SW, Cassidy B, Kendrick C, Yap MH. Dermoscopic dark corner artifacts removal: Friend or foe? Comput Methods Programs Biomed 2024; 244:107986. [PMID: 38157827 DOI: 10.1016/j.cmpb.2023.107986]
Abstract
BACKGROUND AND OBJECTIVES One of the more significant obstacles in the classification of skin cancer is the presence of artifacts. This paper investigates the effect of dark corner artifacts, which result from the use of dermoscopes, on the performance of a deep learning binary classification task. Previous research attempted to remove and inpaint dark corner artifacts with the intention of creating an ideal condition for models. However, such research has been shown to be inconclusive due to a lack of available datasets with corresponding labels for dark corner artifact cases. METHODS To address these issues, we label 10,250 skin lesion images from publicly available datasets and introduce a balanced dataset with an equal number of melanoma and non-melanoma cases. The training set comprises 6126 images without artifacts, and the testing set comprises 4124 images with dark corner artifacts. We conduct three experiments to provide new understanding of the effects of dark corner artifacts, including inpainted and synthetically generated examples, on a deep learning method. RESULTS Our results suggest that superimposing synthetic dark corner artifacts onto the training set improved model performance, particularly in terms of the true negative rate. This indicates that the model learnt to ignore dark corner artifacts, rather than treating them as melanoma, when such artifacts were introduced into the training set. Further, we propose a new approach to quantifying heatmaps that indicate network focus, using a root mean square measure of the brightness intensity in different regions of the heatmaps. CONCLUSIONS The proposed artifact methods can be used in future experiments to help alleviate possible impacts on model performance. Additionally, the newly proposed heatmap quantification analysis will help to better understand the relationships between heatmap results and other model performance metrics.
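Two ingredients of the paper are easy to sketch: superimposing a synthetic dark corner artifact (here a simple centred circular aperture; the paper's generation method may differ) and a root-mean-square brightness measure over heatmap regions (a uniform grid is assumed purely for illustration):

```python
import numpy as np

def add_dark_corner_artifact(image, radius_frac=0.55):
    """Blacken pixels outside a centred circular aperture, mimicking a
    dermoscope field of view. `radius_frac` is illustrative."""
    h, w = image.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    r = radius_frac * min(h, w)
    inside = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= r ** 2
    out = image.copy()
    out[~inside] = 0
    return out

def region_rms_brightness(heatmap, n=3):
    """Root-mean-square brightness per cell of an n-by-n grid over a
    heatmap, in the spirit of the paper's heatmap quantification."""
    rows = np.array_split(np.arange(heatmap.shape[0]), n)
    cols = np.array_split(np.arange(heatmap.shape[1]), n)
    return np.array([[np.sqrt(np.mean(heatmap[np.ix_(r, c)] ** 2.0))
                      for c in cols] for r in rows])
```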
Collapse
Affiliation(s)
- Samuel William Pewton
- Department of Computing and Mathematics, Faculty of Science and Engineering, Manchester Metropolitan University, Chester Street, Manchester, M1 5GD, UK.
| | - Bill Cassidy
- Department of Computing and Mathematics, Faculty of Science and Engineering, Manchester Metropolitan University, Chester Street, Manchester, M1 5GD, UK.
| | - Connah Kendrick
- Department of Computing and Mathematics, Faculty of Science and Engineering, Manchester Metropolitan University, Chester Street, Manchester, M1 5GD, UK.
| | - Moi Hoon Yap
- Department of Computing and Mathematics, Faculty of Science and Engineering, Manchester Metropolitan University, Chester Street, Manchester, M1 5GD, UK.
| |
Collapse
|
16
|
Huang Y, Cheung CY, Li D, Tham YC, Sheng B, Cheng CY, Wang YX, Wong TY. AI-integrated ocular imaging for predicting cardiovascular disease: advancements and future outlook. Eye (Lond) 2024; 38:464-472. [PMID: 37709926 PMCID: PMC10858189 DOI: 10.1038/s41433-023-02724-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2023] [Revised: 07/26/2023] [Accepted: 08/25/2023] [Indexed: 09/16/2023] Open
Abstract
Cardiovascular disease (CVD) remains the leading cause of death worldwide. Assessment of CVD risk plays an essential role in identifying individuals at higher risk and enables the implementation of targeted intervention strategies, leading to reduced CVD prevalence and improved patient survival rates. The ocular vasculature, particularly the retinal vasculature, has emerged as a potential means for CVD risk stratification because of the anatomical and physiological characteristics it shares with other vital organs, such as the brain and heart. The integration of artificial intelligence (AI) into ocular imaging has the potential to overcome limitations associated with traditional semi-automated image analysis, including inefficiency and manual measurement errors. Furthermore, AI techniques may uncover novel and subtle features that contribute to the identification of ocular biomarkers associated with CVD. This review provides a comprehensive overview of advancements in AI-based ocular image analysis for predicting CVD, including the prediction of CVD risk factors, the replacement of traditional CVD biomarkers (e.g., the CT-measured coronary artery calcium score), and the prediction of symptomatic CVD events. The review covers a range of ocular imaging modalities, including colour fundus photography, optical coherence tomography, and optical coherence tomography angiography, as well as other image types such as external eye photographs. Additionally, the review addresses the current limitations of AI research in this field and discusses the challenges associated with translating AI algorithms into clinical practice.
Collapse
Affiliation(s)
- Yu Huang
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
| | - Dawei Li
- College of Future Technology, Peking University, Beijing, China
| | - Yih Chung Tham
- Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
| | - Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Ching Yu Cheng
- Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
| | - Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore.
- Tsinghua Medicine, Tsinghua University, Beijing, China.
- School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Beijing, China.
| |
Collapse
|
17
|
Hase T, Ghosh S, Aisaki KI, Kitajima S, Kanno J, Kitano H, Yachie A. DTox: A deep neural network-based in visio lens for large scale toxicogenomics data. J Toxicol Sci 2024; 49:105-115. [PMID: 38432953 DOI: 10.2131/jts.49.105] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/05/2024]
Abstract
With the advancement of large-scale omics technologies, and particularly the transcriptomics datasets on drug and treatment response now available in public repositories, toxicogenomics has emerged as a key field in safety pharmacology and chemical risk assessment. Traditional statistics-based bioinformatics analysis poses challenges when applied across multidimensional toxicogenomic data, including administration time, dosage, and gene expression levels. Motivated by the visual inspection workflow that field experts use to screen for significant genes, together with the ability of deep neural architectures to learn image signals, we developed DTox, a deep neural network-based in visio approach. Using the Percellome toxicogenomics database, instead of utilizing the numerical gene expression values of the transcripts (gene probes of the microarray) across dose-time combinations, DTox learned image representations of 3D surface plots of distinct time and dosage data points to train a classifier on experts' labels of gene probe significance. DTox outperformed statistical threshold-based bioinformatics and machine learning approaches based on numerical expression values. This result shows the ability of image-driven neural networks to overcome the limitations of classical numeric value-based approaches. Further, by augmenting the model with explainability modules, our study showed the potential to reveal the visual analysis process of human experts in toxicogenomics through the model weights. While the current work demonstrates the application of the DTox model in toxicogenomic studies, it can be generalized further as an in visio approach for multidimensional numeric data, with applications in various fields of medical data science.
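The central step, rendering each probe's dose-time expression values as a 3D surface plot image that a CNN can then classify, can be sketched with matplotlib (array shapes and styling are illustrative; the appearance of the actual Percellome plots is not reproduced):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3D projection)

def probe_surface_image(expr, path="probe.png"):
    """Save a dose x time expression matrix for one gene probe as a 3D
    surface plot image. `expr` has shape (n_doses, n_timepoints)."""
    doses = np.arange(expr.shape[0])
    times = np.arange(expr.shape[1])
    T, D = np.meshgrid(times, doses)
    fig = plt.figure(figsize=(3, 3))
    ax = fig.add_subplot(projection="3d")
    ax.plot_surface(D, T, expr, cmap="viridis")
    ax.set_xlabel("dose")
    ax.set_ylabel("time")
    ax.set_zlabel("expression")
    fig.savefig(path, dpi=100)
    plt.close(fig)

probe_surface_image(np.random.default_rng(0).random((4, 5)))  # toy values
```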
Collapse
Affiliation(s)
- Takeshi Hase
- The Systems Biology Institute, Saisei Ikedayama Bldg
- SBX BioSciences, Inc, Canada
- Institute of Education, Tokyo Medical and Dental University
- Faculty of Pharmacy, Keio University
- Center for Mathematical Modelling and Data Science, Osaka University
| | - Samik Ghosh
- The Systems Biology Institute, Saisei Ikedayama Bldg
| | - Ken-Ichi Aisaki
- Division of Cellular and Molecular Toxicology, Center for Biological Safety and Research (CBSR), National Institute of Health Sciences (NIHS)
| | - Satoshi Kitajima
- Division of Cellular and Molecular Toxicology, Center for Biological Safety and Research (CBSR), National Institute of Health Sciences (NIHS)
| | - Jun Kanno
- The Systems Biology Institute, Saisei Ikedayama Bldg
- Division of Cellular and Molecular Toxicology, Center for Biological Safety and Research (CBSR), National Institute of Health Sciences (NIHS)
- Faculty of Medicine, University of Tsukuba
| | - Hiroaki Kitano
- The Systems Biology Institute, Saisei Ikedayama Bldg
- Integrated Open Systems Unit, Okinawa Institute of Science and Technology (OIST)
| | - Ayako Yachie
- The Systems Biology Institute, Saisei Ikedayama Bldg
- SBX BioSciences, Inc, Canada
| |
Collapse
|
18
|
Zhang JQ, Mi JJ, Wang R. Application of convolutional neural network-based endoscopic imaging in esophageal cancer or high-grade dysplasia: A systematic review and meta-analysis. World J Gastrointest Oncol 2023; 15:1998-2016. [DOI: 10.4251/wjgo.v15.i11.1998] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 09/05/2023] [Accepted: 10/11/2023] [Indexed: 11/15/2023] Open
Abstract
BACKGROUND Esophageal cancer is the seventh most common cancer type worldwide, accounting for 5% of deaths from malignancy. The development of novel diagnostic techniques has facilitated screening, early detection, and improved prognosis. Convolutional neural network (CNN)-based image analysis shows great potential for diagnosing and determining the prognosis of esophageal cancer, enabling even early detection of dysplasia.
AIM To conduct a meta-analysis of the diagnostic accuracy of CNN models for the diagnosis of esophageal cancer and high-grade dysplasia (HGD).
METHODS PubMed, EMBASE, Web of Science and Cochrane Library databases were searched for articles published up to November 30, 2022. We evaluated the diagnostic accuracy of using the CNN model with still image-based analysis and with video-based analysis for esophageal cancer or HGD, as well as for the invasion depth of esophageal cancer. The pooled sensitivity, pooled specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), diagnostic odds ratio (DOR) and area under the curve (AUC) were estimated, together with the 95% confidence intervals (CI). A bivariate method and hierarchical summary receiver operating characteristic method were used to calculate the diagnostic test accuracy of the CNN model. Meta-regression and subgroup analyses were used to identify sources of heterogeneity.
RESULTS A total of 28 studies were included in this systematic review and meta-analysis. Using still image-based analysis for the diagnosis of esophageal cancer or HGD provided a pooled sensitivity of 0.95 (95%CI: 0.92-0.97), pooled specificity of 0.92 (0.89-0.94), PLR of 11.5 (8.3-16.0), NLR of 0.06 (0.04-0.09), DOR of 205 (115-365), and AUC of 0.98 (0.96-0.99). When video-based analysis was used, a pooled sensitivity of 0.85 (0.77-0.91), pooled specificity of 0.73 (0.59-0.83), PLR of 3.1 (1.9-5.0), NLR of 0.20 (0.12-0.34), DOR of 15 (6-38) and AUC of 0.87 (0.84-0.90) were found. Prediction of invasion depth resulted in a pooled sensitivity of 0.90 (0.87-0.92), pooled specificity of 0.83 (95%CI: 0.76-0.88), PLR of 7.8 (1.9-32.0), NLR of 0.10 (0.41-0.25), DOR of 118 (11-1305), and AUC of 0.95 (0.92-0.96).
CONCLUSION CNN-based image analysis in diagnosing esophageal cancer and HGD is an excellent diagnostic method with high sensitivity and specificity that merits further investigation in large, multicenter clinical trials.
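For reference, the pooled likelihood ratios and diagnostic odds ratio follow from pooled sensitivity and specificity by standard identities; a quick check against the still-image estimates (the small discrepancies from the reported 11.5, 0.06, and 205 reflect rounding of the pooled inputs):

```python
def likelihood_ratios(sens, spec):
    """PLR = sens / (1 - spec); NLR = (1 - sens) / spec; DOR = PLR / NLR."""
    plr = sens / (1.0 - spec)
    nlr = (1.0 - sens) / spec
    return plr, nlr, plr / nlr

plr, nlr, dor = likelihood_ratios(0.95, 0.92)  # pooled still-image estimates
print(f"PLR {plr:.1f}, NLR {nlr:.2f}, DOR {dor:.0f}")  # PLR 11.9, NLR 0.05, DOR 218
```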
Collapse
Affiliation(s)
- Jun-Qi Zhang
- The Fifth Clinical Medical College, Shanxi Medical University, Taiyuan 030001, Shanxi Province, China
| | - Jun-Jie Mi
- Department of Gastroenterology, Shanxi Provincial People’s Hospital, Taiyuan 030012, Shanxi Province, China
| | - Rong Wang
- Department of Gastroenterology, The Fifth Hospital of Shanxi Medical University (Shanxi Provincial People’s Hospital), Taiyuan 030012, Shanxi Province, China
| |
Collapse
|
19
|
Ueki Y, Toyota K, Ohira T, Takeuchi K, Satake SI. Gender identification of the horsehair crab, Erimacrus isenbeckii (Brandt, 1848), by image recognition with a deep neural network. Sci Rep 2023; 13:19190. [PMID: 37957197 PMCID: PMC10643619 DOI: 10.1038/s41598-023-46606-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2023] [Accepted: 11/02/2023] [Indexed: 11/15/2023] Open
Abstract
Appearance-based gender identification of the horsehair crab [Erimacrus isenbeckii (Brandt, 1848)] is important for preventing indiscriminate fishing of female crabs. Although their gender is easily identified by visual observation of the abdomen, owing to a difference in the forms of the sex organs, most crabs settle with their shell side upward when placed on a surface, making visual gender identification difficult. Our objective was to use deep learning to identify the gender of the horsehair crab on the basis of images of the shell and abdomen sides. Deep learning was applied to photographs of 60 males and 60 females captured in Funka Bay, Southern Hokkaido, Japan. The deep learning algorithms used the AlexNet, VGG-16, and ResNet-50 convolutional neural networks, of which the VGG-16 network achieved the highest accuracy. Heatmaps were enhanced near the sex organs on the abdomen side (F1 measure: 98%). The bottom of the shell was enhanced in the heatmap of a male; by contrast, the upper part of the shell was enhanced in the heatmap of a female (F1 measure: 95%). Image recognition of the shell side based on a deep learning algorithm enabled more precise gender identification than could be achieved by human-eye inspection.
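Focus heatmaps of this kind are typically produced with a class-activation method; a minimal Grad-CAM sketch for a Keras classifier, assuming a gradient-based method was used (the abstract does not name one) and using the name of VGG-16's last convolutional layer in Keras:

```python
import tensorflow as tf

def grad_cam(model, image, conv_layer_name="block5_conv3"):
    """Grad-CAM heatmap for the top-scoring class. `image` is a single
    preprocessed array of shape (H, W, 3); the default layer name is
    Keras' VGG16 last conv layer and is an assumption here."""
    conv_layer = model.get_layer(conv_layer_name)
    grad_model = tf.keras.Model(model.input, [conv_layer.output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(tf.convert_to_tensor(image[None, ...]))
        score = tf.gather(preds[0], tf.argmax(preds[0]))
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))            # GAP of gradients
    cam = tf.einsum("bhwc,bc->bhw", conv_out, weights)[0]   # weighted feature maps
    cam = tf.nn.relu(cam)                                   # keep positive evidence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```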
Collapse
Affiliation(s)
- Yoshitaka Ueki
- Department of Applied Electronics, Faculty of Advanced Engineering, Tokyo University of Science, 6‑3‑1 Niijuku, Katsushika‑ku, Tokyo, 125‑8585, Japan
| | - Kenji Toyota
- Noto Marine Laboratory, Institute of Nature and Environmental Technology, Kanazawa University, Ogi, Noto‑cho, Ishikawa, 927‑0553, Japan
- Department of Biological Science and Technology, Faculty of Advanced Engineering, Tokyo University of Science, 6‑3‑1 Niijuku, Katsushika‑ku, Tokyo, 125‑8585, Japan
- Department of Science, Faculty of Science, Kanagawa University, 3-27-1 Rokkakubashi, Kanagawa-ku, Yokohama-shi, Kanagawa, 221‑8686, Japan
| | - Tsuyoshi Ohira
- Department of Science, Faculty of Science, Kanagawa University, 3-27-1 Rokkakubashi, Kanagawa-ku, Yokohama-shi, Kanagawa, 221‑8686, Japan
| | - Ken Takeuchi
- Oshamambe Division, Institute of Arts and Sciences, Tokyo University of Science, 102-1 Tomino, Oshamambe-cho, Yamakoshi-gun, Hokkaido, 049-3514, Japan
| | - Shin-Ichi Satake
- Department of Applied Electronics, Faculty of Advanced Engineering, Tokyo University of Science, 6‑3‑1 Niijuku, Katsushika‑ku, Tokyo, 125‑8585, Japan.
| |
Collapse
|
20
|
Alsayat A, Elmezain M, Alanazi S, Alruily M, Mostafa AM, Said W. Multi-Layer Preprocessing and U-Net with Residual Attention Block for Retinal Blood Vessel Segmentation. Diagnostics (Basel) 2023; 13:3364. [PMID: 37958260 PMCID: PMC10648654 DOI: 10.3390/diagnostics13213364] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2023] [Revised: 10/21/2023] [Accepted: 10/30/2023] [Indexed: 11/15/2023] Open
Abstract
Retinal blood vessel segmentation is a valuable tool for clinicians in diagnosing conditions such as atherosclerosis, glaucoma, and age-related macular degeneration. This paper presents a new framework for segmenting blood vessels in retinal images. The framework has two stages: a multi-layer preprocessing stage and a subsequent segmentation stage employing a U-Net with a multi-residual attention block. The multi-layer preprocessing stage has three steps. The first step is noise reduction, employing a U-shaped convolutional neural network with matrix factorization (CNN with MF) and a detailed U-shaped U-Net (D_U-Net) to minimize image noise, culminating in the selection of the most suitable image based on PSNR and SSIM values. The second step is dynamic data imputation, utilizing multiple models to fill in missing data. The third step is data augmentation using a latent diffusion model (LDM) to expand the training dataset. The second stage of the framework is segmentation, where U-Nets with a multi-residual attention block are used to segment the preprocessed, denoised retinal images. The experiments show that the framework is effective at segmenting retinal blood vessels, achieving a Dice score of 95.32, accuracy of 93.56, precision of 95.68, and recall of 95.45. The CNN with MF and D_U-Net also removed noise efficiently, according to PSNR and SSIM values at noise levels of 0.1, 0.25, 0.5, and 0.75. The LDM achieved an inception score of 13.6 and an FID of 46.2 in the augmentation step.
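The headline segmentation metric is the Dice score; a minimal reference implementation for binary vessel masks (the reported values are presumably percentages of this quantity):

```python
import numpy as np

def dice_score(pred, truth, eps=1e-8):
    """Dice coefficient between binary masks: 2|P & T| / (|P| + |T|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)
```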
Collapse
Affiliation(s)
- Ahmed Alsayat
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Saudi Arabia; (S.A.); (M.A.)
| | - Mahmoud Elmezain
- Computer Science Division, Faculty of Science, Tanta University, Tanta 31527, Egypt;
- Computer Science Department, College of Computer Science and Engineering, Taibah University, Yanbu 966144, Saudi Arabia
| | - Saad Alanazi
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Saudi Arabia; (S.A.); (M.A.)
| | - Meshrif Alruily
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Saudi Arabia; (S.A.); (M.A.)
| | - Ayman Mohamed Mostafa
- Information Systems Department, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Saudi Arabia
| | - Wael Said
- Computer Science Department, Faculty of Computers and Informatics, Zagazig University, Zagazig 44511, Egypt;
- Computer Science Department, College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
| |
Collapse
|
21
|
Zhang S, Echegoyen J. Design and Usability Study of a Point of Care mHealth App for Early Dry Eye Screening and Detection. J Clin Med 2023; 12:6479. [PMID: 37892616 PMCID: PMC10607458 DOI: 10.3390/jcm12206479] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2023] [Revised: 09/27/2023] [Accepted: 10/10/2023] [Indexed: 10/29/2023] Open
Abstract
Significantly increased eye blink rate and partial blinks have been well documented in patients with dry eye disease (DED), a multifactorial eye disorder with few effective methods for clinical diagnosis. In this study, a point-of-care mHealth App named "EyeScore" was developed, utilizing blink rate and blink patterns as early clinical biomarkers for DED. EyeScore uses an iPhone for a 1-min in-app recording of eyelid movements. The use of facial landmarks, the eye aspect ratio (EAR), and its derivatives enabled a comprehensive analysis of video frames to determine eye blink rate and partial blink counts. Smartphone videos from ten DED patients and ten non-DED controls were analyzed to optimize EAR-based thresholds, with eye blink and partial blink results in excellent agreement with manual counts. Importantly, a clinically relevant algorithm for calculating an "eye healthiness score" was created, which took into consideration eye blink rate, partial blink counts, and other demographic and clinical risk factors for DED. This 10-point score can be conveniently measured at any time in a non-invasive manner and successfully identified three individuals with DED conditions from ten non-DED controls. Thus, EyeScore may serve as a valuable mHealth App for early DED screening, detection, and treatment monitoring.
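The eye aspect ratio underlying the blink analysis is a standard landmark-based quantity (Soukupová and Čech, 2016). A sketch assuming the usual six-point eye landmark layout; the app's exact landmark set and threshold are not given in the abstract:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from six eye landmarks p1..p6 (standard ordering):
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops toward zero
    as the eyelid closes."""
    p = np.asarray(eye, dtype=float)  # shape (6, 2): (x, y) per landmark
    v1 = np.linalg.norm(p[1] - p[5])
    v2 = np.linalg.norm(p[2] - p[4])
    h = np.linalg.norm(p[0] - p[3])
    return (v1 + v2) / (2.0 * h)

# A frame counts toward a blink while EAR stays below a tuned threshold:
BLINK_THRESHOLD = 0.2  # illustrative; the app's tuned value is not published
```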
Collapse
Affiliation(s)
- Sydney Zhang
- Department of Clinical Research, Westview Eye Institute, San Diego, CA 92129, USA;
| | | |
Collapse
|
22
|
Chen RJ, Wang JJ, Williamson DFK, Chen TY, Lipkova J, Lu MY, Sahai S, Mahmood F. Algorithmic fairness in artificial intelligence for medicine and healthcare. Nat Biomed Eng 2023; 7:719-742. [PMID: 37380750 PMCID: PMC10632090 DOI: 10.1038/s41551-023-01056-8] [Citation(s) in RCA: 116] [Impact Index Per Article: 58.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2021] [Accepted: 04/13/2023] [Indexed: 06/30/2023]
Abstract
In healthcare, the development and deployment of insufficiently fair systems of artificial intelligence (AI) can undermine the delivery of equitable care. Assessments of AI models stratified across subpopulations have revealed inequalities in how patients are diagnosed, treated and billed. In this Perspective, we outline fairness in machine learning through the lens of healthcare, and discuss how algorithmic biases (in data acquisition, genetic variation and intra-observer labelling variability, in particular) arise in clinical workflows and the resulting healthcare disparities. We also review emerging technology for mitigating biases via disentanglement, federated learning and model explainability, and their role in the development of AI-based software as a medical device.
Collapse
Affiliation(s)
- Richard J Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
| | - Judy J Wang
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Boston University School of Medicine, Boston, MA, USA
| | - Drew F K Williamson
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Tiffany Y Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Jana Lipkova
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Ming Y Lu
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Sharifa Sahai
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Systems Biology, Harvard Medical School, Boston, MA, USA
| | - Faisal Mahmood
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
- Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA.
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA.
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
- Harvard Data Science Initiative, Harvard University, Cambridge, MA, USA.
| |
Collapse
|
23
|
Babenko B, Traynis I, Chen C, Singh P, Uddin A, Cuadros J, Daskivich LP, Maa AY, Kim R, Kang EYC, Matias Y, Corrado GS, Peng L, Webster DR, Semturs C, Krause J, Varadarajan AV, Hammel N, Liu Y. A deep learning model for novel systemic biomarkers in photographs of the external eye: a retrospective study. Lancet Digit Health 2023; 5:e257-e264. [PMID: 36966118 PMCID: PMC11818944 DOI: 10.1016/s2589-7500(23)00022-5] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Revised: 01/13/2023] [Accepted: 01/31/2023] [Indexed: 03/27/2023]
Abstract
BACKGROUND Photographs of the external eye were recently shown to reveal signs of diabetic retinal disease and elevated glycated haemoglobin. This study aimed to test the hypothesis that external eye photographs contain information about additional systemic medical conditions. METHODS We developed a deep learning system (DLS) that takes external eye photographs as input and predicts systemic parameters, such as those related to the liver (albumin, aspartate aminotransferase [AST]); kidney (estimated glomerular filtration rate [eGFR], urine albumin-to-creatinine ratio [ACR]); bone or mineral (calcium); thyroid (thyroid stimulating hormone); and blood (haemoglobin, white blood cells [WBC], platelets). This DLS was trained using 123 130 images from 38 398 patients with diabetes undergoing diabetic eye screening in 11 sites across Los Angeles county, CA, USA. Evaluation focused on nine prespecified systemic parameters and leveraged three validation sets (A, B, C) spanning 25 510 patients with and without diabetes undergoing eye screening in three independent sites in Los Angeles county, CA, and the greater Atlanta area, GA, USA. We compared performance against baseline models incorporating available clinicodemographic variables (eg, age, sex, race and ethnicity, years with diabetes). FINDINGS Relative to the baseline, the DLS achieved statistically significant superior performance at detecting AST >36·0 U/L, calcium <8·6 mg/dL, eGFR <60·0 mL/min/1·73 m2, haemoglobin <11·0 g/dL, platelets <150·0 × 103/μL, ACR ≥300 mg/g, and WBC <4·0 × 103/μL on validation set A (a population resembling the development datasets), with the area under the receiver operating characteristic curve (AUC) of the DLS exceeding that of the baseline by 5·3-19·9% (absolute differences in AUC). On validation sets B and C, with substantial patient population differences compared with the development datasets, the DLS outperformed the baseline for ACR ≥300·0 mg/g and haemoglobin <11·0 g/dL by 7·3-13·2%. INTERPRETATION We found further evidence that external eye photographs contain biomarkers spanning multiple organ systems. Such biomarkers could enable accessible and non-invasive screening of disease. Further work is needed to understand the translational implications. FUNDING Google.
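The evaluation compares the DLS with clinicodemographic baselines by absolute AUC difference on a binarised target. A toy sketch with stand-in data (all arrays below are hypothetical; for brevity the baseline is scored on its own training data, whereas the study used held-out validation sets):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)              # e.g. haemoglobin < 11 g/dL (stand-in)
dls_score = y * 0.4 + rng.random(500)    # stand-in for the DLS output
X_base = rng.random((500, 4))            # age, sex, etc. (stand-ins)

baseline = LogisticRegression().fit(X_base, y)
auc_dls = roc_auc_score(y, dls_score)
auc_base = roc_auc_score(y, baseline.predict_proba(X_base)[:, 1])
print(f"absolute AUC difference: {auc_dls - auc_base:+.3f}")
```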
Collapse
Affiliation(s)
| | | | | | | | | | | | - Lauren P Daskivich
- Ophthalmic Services and Eye Health Programs, Los Angeles County Department of Health Services, Los Angeles, CA, USA; Department of Ophthalmology, University of Southern California Keck School of Medicine/Roski Eye Institute, Los Angeles, CA USA
| | - April Y Maa
- Department of Ophthalmology, Emory University School of Medicine, Atlanta, GA, USA; Regional Telehealth Services, Technology-based Eye Care Services (TECS) division, Veterans Integrated Service Network (VISN) 7, Decatur, GA, USA
| | - Ramasamy Kim
- Aravind Eye Hospital, Madurai, Tamil Nadu, India
| | - Eugene Yu-Chuan Kang
- Department of Ophthalmology, Linkou Medical Center, Chang Gung Memorial Hospital, Taoyuan, Taiwan
| | | | | | | | | | | | | | | | | | - Yun Liu
- Google Health, Palo Alto, CA, USA.
| |
Collapse
|
24
|
Lou YS, Lin CS, Fang WH, Lee CC, Lin C. Extensive deep learning model to enhance electrocardiogram application via latent cardiovascular feature extraction from identity identification. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 231:107359. [PMID: 36738606 DOI: 10.1016/j.cmpb.2023.107359] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Revised: 12/22/2022] [Accepted: 01/17/2023] [Indexed: 06/18/2023]
Abstract
BACKGROUND AND OBJECTIVE Deep learning models (DLMs) have been successfully applied in biomedicine primarily using supervised learning with large, annotated databases. However, scarce training resources limit the potential of DLMs for electrocardiogram (ECG) analysis. METHODS We have developed a novel pre-training strategy for unsupervised identity identification with an area under the receiver operating characteristic curve (AUC) >0.98. Accordingly, a DLM pre-trained with identity identification can be applied to 70 patient characteristic predictions using transfer learning (TL). These ECG-based patient characteristics were then used for cardiovascular disease (CVD) risk prediction. The DLMs were trained using 507,729 ECGs from 222,473 patients and validated using two independent validation sets (n = 27,824/31,925). RESULTS The DLMs using our method exhibited better performance than directly trained DLMs. Additionally, our DLM performed better than those of previous studies in terms of gender (AUC [internal/external] = 0.982/0.968), age (correlation = 0.886/0.892), low ejection fraction (AUC = 0.942/0.951), and critical markers not addressed previously, including high B-type natriuretic peptide (AUC = 0.921/0.899). Additionally, approximately 50% of the ECG-based characteristics provided significantly more prediction information for cardiovascular risk than real characteristics. CONCLUSIONS This is the first study to use identity identification as a pre-training task for TL in ECG analysis. An extensive exploration of the relationship between ECG and 70 patient characteristics was conducted. Our DLM-enhanced ECG interpretation system extensively advanced ECG-related patient characteristic prediction and mortality risk management for cardiovascular diseases.
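The two-stage recipe, pre-training on patient identity and then transferring the encoder to characteristic prediction, can be sketched in Keras (the architecture, sizes, and low-ejection-fraction target below are illustrative; the paper's network is not reproduced):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def ecg_encoder(n_samples=5000, n_leads=12):
    """Small 1-D CNN encoder for raw ECG; architecture is illustrative."""
    inp = layers.Input((n_samples, n_leads))
    x = inp
    for f in (32, 64, 128):
        x = layers.Conv1D(f, 7, strides=2, activation="relu")(x)
        x = layers.MaxPooling1D(2)(x)
    return models.Model(inp, layers.GlobalAveragePooling1D()(x))

encoder = ecg_encoder()

# Stage 1 (pre-training): classify which patient an ECG belongs to.
n_patients = 1000  # illustrative
pretrain = models.Sequential([encoder, layers.Dense(n_patients, activation="softmax")])
pretrain.compile("adam", "sparse_categorical_crossentropy")
# pretrain.fit(ecgs, patient_ids, ...)  # updates the shared encoder weights

# Stage 2 (transfer): reuse the encoder for a characteristic, e.g. low EF.
head = models.Sequential([encoder, layers.Dense(1, activation="sigmoid")])
head.compile("adam", "binary_crossentropy", metrics=[tf.keras.metrics.AUC()])
# head.fit(ecgs, low_ef_labels, ...)
```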
Collapse
Affiliation(s)
- Yu-Sheng Lou
- Graduate Institutes of Life Sciences, National Defense Medical Center, Taipei, Taiwan; School of Public Health, National Defense Medical Center, Taipei, Taiwan
| | - Chin-Sheng Lin
- Division of Cardiology, Department of Internal Medicine, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan, R.O.C.; Medical Technology Education Center, School of Medicine, National Defense Medical Center, Taipei, Taiwan, R.O.C
| | - Wen-Hui Fang
- Department of Family and Community Medicine, Department of Internal Medicine, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan, R.O.C
| | - Chia-Cheng Lee
- Department of Medical Informatics, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan, R.O.C.; Division of Colorectal Surgery, Department of Surgery, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan, R.O.C
| | - Chin Lin
- Graduate Institutes of Life Sciences, National Defense Medical Center, Taipei, Taiwan; School of Public Health, National Defense Medical Center, Taipei, Taiwan; Medical Technology Education Center, School of Medicine, National Defense Medical Center, Taipei, Taiwan, R.O.C.; School of Medicine, National Defense Medical Center, Taipei, Taiwan, R.O.C..
| |
Collapse
|
25
|
Iao WC, Zhang W, Wang X, Wu Y, Lin D, Lin H. Deep Learning Algorithms for Screening and Diagnosis of Systemic Diseases Based on Ophthalmic Manifestations: A Systematic Review. Diagnostics (Basel) 2023; 13:diagnostics13050900. [PMID: 36900043 PMCID: PMC10001234 DOI: 10.3390/diagnostics13050900] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Revised: 02/16/2023] [Accepted: 02/18/2023] [Indexed: 03/06/2023] Open
Abstract
Deep learning (DL) is the new high-profile technology in medical artificial intelligence (AI) for building screening and diagnostic algorithms for various diseases. The eye provides a window for observing neurovascular pathophysiological changes. Previous studies have proposed that ocular manifestations indicate systemic conditions, revealing a new route to disease screening and management. Multiple DL models have been developed for identifying systemic diseases based on ocular data; however, the methods and results have varied immensely across studies. This systematic review aims to summarize the existing studies and provide an overview of the present and future aspects of DL-based algorithms for screening systemic diseases based on ophthalmic examinations. We performed a thorough search in PubMed®, Embase, and Web of Science for English-language articles published until August 2022. Among the 2873 articles collected, 62 were included for analysis and quality assessment. The selected studies mainly utilized eye appearance, retinal data, and eye movements as model input and covered a wide range of systemic diseases, such as cardiovascular diseases, neurodegenerative diseases, and systemic health features. Despite the decent performance reported, most models lack disease specificity and public generalizability for real-world application. This review summarizes the pros and cons of these approaches and discusses the prospects of implementing AI based on ocular data in real-world clinical scenarios.
Collapse
Affiliation(s)
- Wai Cheng Iao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Weixing Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Xun Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Yuxuan Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou 570311, China
- Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 510060, China
- Correspondence:
| |
Collapse
|
26
|
Nagasato D, Sogawa T, Tanabe M, Tabuchi H, Numa S, Oishi A, Ohashi Ikeda H, Tsujikawa A, Maeda T, Takahashi M, Ito N, Miura G, Shinohara T, Egawa M, Mitamura Y. Estimation of Visual Function Using Deep Learning From Ultra-Widefield Fundus Images of Eyes With Retinitis Pigmentosa. JAMA Ophthalmol 2023; 141:305-313. [PMID: 36821134 PMCID: PMC9951103 DOI: 10.1001/jamaophthalmol.2022.6393] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/24/2023]
Abstract
Importance There is no widespread effective treatment to halt the progression of retinitis pigmentosa. Consequently, adequate assessment and estimation of residual visual function are important clinically. Objective To examine whether deep learning can accurately estimate the visual function of patients with retinitis pigmentosa by using ultra-widefield fundus images obtained on concurrent visits. Design, Setting, and Participants Data for this multicenter, retrospective, cross-sectional study were collected between January 1, 2012, and December 31, 2018. This study included 695 consecutive patients with retinitis pigmentosa who were examined at 5 institutions. Each of the 3 types of input images-ultra-widefield pseudocolor images, ultra-widefield fundus autofluorescence images, and both ultra-widefield pseudocolor and fundus autofluorescence images-was paired with 1 of the 31 types of ensemble models constructed from 5 deep learning models (Visual Geometry Group-16, Residual Network-50, InceptionV3, DenseNet121, and EfficientNetB0). We used 848, 212, and 214 images for the training, validation, and testing data, respectively. All data from 1 institution were used for the independent testing data. Data analysis was performed from June 7, 2021, to December 5, 2022. Main Outcomes and Measures The mean deviation on the Humphrey field analyzer, central retinal sensitivity, and best-corrected visual acuity were estimated. The image type-ensemble model combination that yielded the smallest mean absolute error was defined as the model with the best estimation accuracy. After removal of the bias of including both eyes with the generalized linear mixed model, correlations between the actual values of the testing data and the estimated values by the best accuracy model were examined by calculating standardized regression coefficients and P values. Results The study included 1274 eyes of 695 patients. A total of 385 patients were female (55.4%), and the mean (SD) age was 53.9 (17.2) years. Among the 3 types of images, the model using ultra-widefield fundus autofluorescence images alone provided the best estimation accuracy for mean deviation, central sensitivity, and visual acuity. Standardized regression coefficients were 0.684 (95% CI, 0.567-0.802) for the mean deviation estimation, 0.697 (95% CI, 0.590-0.804) for the central sensitivity estimation, and 0.309 (95% CI, 0.187-0.430) for the visual acuity estimation (all P < .001). Conclusions and Relevance Results of this study suggest that the visual function estimation in patients with retinitis pigmentosa from ultra-widefield fundus autofluorescence images using deep learning might help assess disease progression objectively. Findings also suggest that deep learning models might monitor the progression of retinitis pigmentosa efficiently during follow-up.
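Model selection here ranks the image-type/ensemble combinations by mean absolute error; a minimal sketch of scoring one averaging ensemble (a simple mean is assumed; the paper's exact ensembling rule is not detailed in the abstract):

```python
import numpy as np

def ensemble_mae(member_preds, y_true):
    """MAE of an ensemble that averages member outputs.
    `member_preds` has shape (n_models, n_eyes)."""
    ensemble = np.mean(member_preds, axis=0)
    return float(np.mean(np.abs(ensemble - y_true)))
```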
Collapse
Affiliation(s)
- Daisuke Nagasato
- Department of Ophthalmology, Saneikai Tsukazaki Hospital, Himeji, Japan; Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan; Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
| | - Takahiro Sogawa
- Department of Ophthalmology, Saneikai Tsukazaki Hospital, Himeji, Japan
| | - Mao Tanabe
- Department of Ophthalmology, Saneikai Tsukazaki Hospital, Himeji, Japan
| | - Hitoshi Tabuchi
- Department of Ophthalmology, Saneikai Tsukazaki Hospital, Himeji, Japan; Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan; Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
| | - Shogo Numa
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
| | - Akio Oishi
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan; Department of Ophthalmology and Visual Sciences, Graduate School of Biomedical Sciences, Nagasaki University, Nagasaki, Japan
| | - Hanako Ohashi Ikeda
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
| | - Akitaka Tsujikawa
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
| | - Tadao Maeda
- Research Center, Kobe City Eye Hospital, Kobe, Japan; Laboratory for Retinal Regeneration, RIKEN Center for Biosystems Dynamics Research, Kobe, Japan
| | - Masayo Takahashi
- Research Center, Kobe City Eye Hospital, Kobe, Japan; Laboratory for Retinal Regeneration, RIKEN Center for Biosystems Dynamics Research, Kobe, Japan; Vision Care Inc, Kobe, Japan
| | - Nana Ito
- Department of Ophthalmology and Visual Science, Chiba University Graduate School of Medicine, Chiba, Japan
| | - Gen Miura
- Department of Ophthalmology and Visual Science, Chiba University Graduate School of Medicine, Chiba, Japan
| | - Terumi Shinohara
- Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan
| | - Mariko Egawa
- Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan
| | - Yoshinori Mitamura
- Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan
| |
Collapse
|
27
|
Kumar K, Kumar P, Deb D, Unguresan ML, Muresan V. Artificial Intelligence and Machine Learning Based Intervention in Medical Infrastructure: A Review and Future Trends. Healthcare (Basel) 2023; 11:healthcare11020207. [PMID: 36673575 PMCID: PMC9859198 DOI: 10.3390/healthcare11020207] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2022] [Revised: 01/01/2023] [Accepted: 01/04/2023] [Indexed: 01/13/2023] Open
Abstract
Researchers in the life sciences who work with Artificial Intelligence (AI) and Machine Learning (ML) are under increasing pressure to develop algorithms faster than ever. The potential for revealing innovative insights and speeding breakthroughs lies in using large datasets integrated on several levels. However, even though more data are at our disposal than ever, only a meager portion is filtered, interpreted, integrated, and analyzed. AI and ML address how computers may learn from data and imitate human mental processes, enabling both greater learning capacity and decision support systems at a scale that is redefining the future of healthcare. This article offers a survey of the uses of AI and ML in the healthcare industry, with a particular emphasis on clinical, developmental, administrative, and global health implementations that support the healthcare infrastructure as a whole, along with the impact of and expectations for each component of healthcare. Possible future trends and scopes for the utilization of this technology in medical infrastructure are also discussed.
Collapse
Affiliation(s)
- Kamlesh Kumar
- Department of Electrical and Computer Science Engineering, Institute of Infrastructure Technology Research And Management, Ahmedabad 380026, India
| | - Prince Kumar
- Department of Electrical and Computer Science Engineering, Institute of Infrastructure Technology Research And Management, Ahmedabad 380026, India
| | - Dipankar Deb
- Department of Electrical and Computer Science Engineering, Institute of Infrastructure Technology Research And Management, Ahmedabad 380026, India
- Correspondence:
| | | | - Vlad Muresan
- Department of Automation, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
| |
Collapse
|
28
|
Wan C, Hua R, Li K, Hong X, Fang D, Yang W. Automatic Diagnosis of Different Types of Retinal Vein Occlusion Based on Fundus Images. INT J INTELL SYST 2023; 2023. [DOI: 10.1155/2023/1587410] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2022] [Accepted: 08/31/2023] [Indexed: 01/22/2024]
Abstract
Retinal vein occlusion (RVO) is the second most common cause of blindness after diabetic retinopathy. Manual screening of fundus images to detect RVO is time consuming. Deep-learning techniques have been used to screen for RVO because of their outstanding performance in many applications. However, unlike natural images, medical images contain smaller lesions, which require a more elaborate approach. To provide patients with an accurate diagnosis, followed by timely and effective treatment, we developed an intelligent method for automatic RVO screening on fundus images. Like a convolutional neural network, the Swin Transformer learns a hierarchy of low- to high-level features; however, it extracts features from fundus images through attention modules, which attend to the interrelationships among features. The resulting model is more general, does not rely entirely on the data itself, and propagates information from local to global context rather than focusing only on local information. To suppress overfitting, we adopt label smoothing, a regularization strategy that adds noise to one-hot targets, reducing the weight of the true class label when computing the loss. A 5-fold cross-validation comparison of different models on our own datasets indicates that the Swin Transformer performs better. With the proposed method, the accuracy of classifying all datasets is 98.75 ± 0.000, and the accuracies of identifying MRVO, CRVO, BRVO, and normal images are 94.49 ± 0.094, 99.98 ± 0.015, 98.88 ± 0.08, and 99.42 ± 0.012, respectively. The method will be useful for diagnosing RVO and helping to determine its grade from fundus images, and it has the potential to support further diagnosis and treatment for patients.
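Label smoothing as described is a one-liner in most frameworks; a Keras sketch (the smoothing factor 0.1 is illustrative, not the paper's value):

```python
import tensorflow as tf

# Soften one-hot targets so the true class carries less weight in the loss:
# with smoothing eps and K classes, y_smooth = y_onehot * (1 - eps) + eps / K,
# e.g. [0, 1, 0, 0] -> [0.025, 0.925, 0.025, 0.025] for eps = 0.1, K = 4.
loss = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)
# model.compile(optimizer="adam", loss=loss, metrics=["accuracy"])
```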
Collapse
|
29
|
Zhao Y, Yu R, Sun C, Fan W, Zou H, Chen X, Huang Y, Yuan R. Nomogram model predicts the risk of visual impairment in diabetic retinopathy: a retrospective study. BMC Ophthalmol 2022; 22:478. [PMID: 36482340 PMCID: PMC9733396 DOI: 10.1186/s12886-022-02710-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Accepted: 11/28/2022] [Indexed: 12/13/2022] Open
Abstract
BACKGROUND To develop a nomogram model for predicting the risk of visual impairment in diabetic retinopathy (DR). METHODS Patients with DR who underwent both optical coherence tomography angiography (OCTA) and fundus fluorescein angiography (FFA) were retrospectively enrolled. FFA was used for DR staging, while swept-source optical coherence tomography (SS-OCT) of the macula and 3 × 3-mm OCTA blood-flow imaging were used to assess retinal structure and blood-flow parameters. We defined a logarithm of the minimum angle of resolution visual acuity (LogMAR VA) ≥0.5 as visual impairment, and characteristics correlated with VA were screened using binary logistic regression. The selected factors were then entered into a multivariate binary stepwise regression, and a nomogram was developed to predict the risk of visual impairment. Finally, the model was validated using the area under the receiver operating characteristic (ROC) curve (AUC), calibration plots, decision curve analysis (DCA), and the clinical impact curve (CIC). RESULTS A total of 29 parameters were included in the analysis, and 13 characteristics were used to develop the nomogram model. Ultimately, diabetic macular ischaemia (DMI) grading, disorganization of the retinal inner layers (DRIL), outer layer disruption, and the vessel density of the inferior choriocapillaris layer (SubVD) were statistically significant (P < 0.05). The model showed good accuracy based on the ROC (AUC = 0.931) and calibration curves (C-index = 0.930). The DCA showed that the model can guide clinical practice at risk threshold probabilities between 3% and 91%, and the proportion of people at risk at each threshold probability is illustrated by the CIC. CONCLUSION The nomogram model for predicting visual impairment in DR patients demonstrated good accuracy and utility, and it can be used to guide clinical practice. TRIAL REGISTRATION Chinese Clinical Trial Registry, ChiCTR2200059835. Registered 12 May 2022, https://www.chictr.org.cn/edit.aspx?pid=169290&htm=4.
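A nomogram is a graphical rendering of a fitted multivariable logistic model; the underlying fit can be sketched with scikit-learn (all data below are randomly generated stand-ins for the four retained predictors, not study data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.random((200, 4))  # stand-ins: DMI grade, DRIL, outer layer disruption, SubVD
y = (X @ np.array([1.5, 1.0, 1.2, -2.0]) + rng.normal(0, 0.5, 200) > 0.4).astype(int)

model = LogisticRegression().fit(X, y)  # each coefficient maps to a nomogram axis
print("log-odds per unit:", model.coef_.round(2))
print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]).round(3))
```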
Collapse
Affiliation(s)
- Yuancheng Zhao
- Department of Ophthalmology, the Second Affiliated Hospital of Army Medical University, 183#, Xinqiaozheng St., Shapingba District, Chongqing, 400037, People’s Republic of China
| | - Rentao Yu
- Department of Dermatology, the First Affiliated Hospital of Chongqing Medical University, 1#, Youyi Road, Yuanjiagang, Yuzhong District, Chongqing, China
| | - Chao Sun
- Department of Ophthalmology, the Second Affiliated Hospital of Army Medical University, 183#, Xinqiaozheng St., Shapingba District, Chongqing, 400037, People’s Republic of China
| | - Wei Fan
- Department of Ophthalmology, the Second Affiliated Hospital of Army Medical University, 183#, Xinqiaozheng St., Shapingba District, Chongqing, 400037, People’s Republic of China
| | - Huan Zou
- Department of Ophthalmology, the Second Affiliated Hospital of Army Medical University, 183#, Xinqiaozheng St., Shapingba District, Chongqing, 400037, People’s Republic of China
| | - Xiaofan Chen
- Department of Ophthalmology, the Second Affiliated Hospital of Army Medical University, 183#, Xinqiaozheng St., Shapingba District, Chongqing, 400037, People’s Republic of China
| | - Yanming Huang
- Department of Ophthalmology, the Second Affiliated Hospital of Army Medical University, 183#, Xinqiaozheng St., Shapingba District, Chongqing, 400037, People’s Republic of China
| | - Rongdi Yuan
- Department of Ophthalmology, the Second Affiliated Hospital of Army Medical University, 183#, Xinqiaozheng St., Shapingba District, Chongqing, 400037, People’s Republic of China
| |
Collapse
|
30
|
Eyeing severe diabetes upfront. Nat Biomed Eng 2022; 6:1321-1322. [PMID: 35411115 DOI: 10.1038/s41551-022-00879-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/14/2023]
|
31
|
Predicting demographics from meibography using deep learning. Sci Rep 2022; 12:15701. [PMID: 36127431 PMCID: PMC9489726 DOI: 10.1038/s41598-022-18933-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Accepted: 08/22/2022] [Indexed: 11/08/2022] Open
Abstract
This study introduces a deep learning approach to predicting demographic features from meibography images. A total of 689 meibography images with corresponding subject demographic data were used to develop a deep learning model for predicting gland morphology and demographics from images. The model achieved average accuracies of 77%, 76%, and 86% for predicting Meibomian gland morphological features, subject age, and ethnicity, respectively. The model was further analyzed to identify the most highly weighted gland morphological features used by the algorithm to predict demographic characteristics. The two most important gland morphological features for predicting age were the percent area of gland atrophy and the percentage of ghost glands. The two most important morphological features for predicting ethnicity were gland density and the percentage of ghost glands. The approach offers an alternative to traditional associative modeling for identifying relationships between Meibomian gland morphological features and subject demographic characteristics. This deep learning methodology can currently predict demographic features from de-identified meibography images with better than 75% accuracy, a figure that is likely to improve in future models trained on larger datasets; this has significant implications for patient privacy in biomedical imaging.
Collapse
|
32
|
Kim BR, Yoo TK, Kim HK, Ryu IH, Kim JK, Lee IS, Kim JS, Shin DH, Kim YS, Kim BT. Oculomics for sarcopenia prediction: a machine learning approach toward predictive, preventive, and personalized medicine. EPMA J 2022; 13:367-382. [PMID: 36061832 PMCID: PMC9437169 DOI: 10.1007/s13167-022-00292-3] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2022] [Accepted: 07/25/2022] [Indexed: 12/08/2022]
Abstract
Aims Sarcopenia is characterized by a gradual loss of skeletal muscle mass and strength and by increased adverse outcomes. Recently, large-scale epidemiological studies have demonstrated relationships between several chronic disorders and ocular pathological conditions using an oculomics approach. We hypothesized that sarcopenia can be predicted through eye examinations, without invasive tests or radiologic evaluations, in the context of predictive, preventive, and personalized medicine (PPPM/3PM). Methods We analyzed data from the Korean National Health and Nutrition Examination Survey (KNHANES). The training set (80%, randomly selected from 2008 to 2010) data were used to construct the machine learning models. Internal (20%, randomly selected from 2008 to 2010) and external (from the KNHANES 2011) validation sets were used to assess the ability to predict sarcopenia. We included 8092 participants in the final dataset. Machine learning models (XGBoost) were trained on ophthalmological examinations and demographic factors to detect sarcopenia. Results In the exploratory analysis, decreased levator function (odds ratio [OR], 1.41; P value <0.001), cataracts (OR, 1.31; P value = 0.013), and age-related macular degeneration (OR, 1.38; P value = 0.026) were associated with an increased risk of sarcopenia in men. In women, an increased risk of sarcopenia was associated with blepharoptosis (OR, 1.23; P value = 0.038) and cataracts (OR, 1.29; P value = 0.010). The XGBoost technique showed areas under the receiver operating characteristic curves (AUCs) of 0.746 and 0.762 in men and women, respectively. The external validation achieved AUCs of 0.751 and 0.785 for men and women, respectively. To give practitioners fast, practical, hands-on experience with the predictive model and let them test the idea of oculomics-based sarcopenia prediction, we developed a simple web-based calculator application (https://knhanesoculomics.github.io/sarcopenia), based on the model established in this study, to predict the risk of sarcopenia and facilitate screening. Conclusion Sarcopenia is treatable before the vicious cycle of sarcopenia-related deterioration begins. Therefore, early identification of individuals at high risk of sarcopenia is essential in the context of PPPM. Our oculomics-based approach provides an effective strategy for sarcopenia prediction. The proposed method shows promise in significantly increasing the number of patients diagnosed with sarcopenia, potentially facilitating earlier intervention. Through patient oculometric monitoring, various pathological factors related to sarcopenia can be analyzed simultaneously, and doctors can provide personalized medical services according to each cause. Further studies are needed to confirm whether such a prediction algorithm can be used in real-world clinical settings to improve the diagnosis of sarcopenia. Supplementary Information The online version contains supplementary material available at 10.1007/s13167-022-00292-3.
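The detection step is a gradient-boosted tree classifier over ophthalmic and demographic features; a minimal XGBoost sketch with stand-in data (features, labels, and hyperparameters are illustrative, not those fitted to KNHANES):

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
X = rng.random((1000, 8))        # stand-ins: age, levator function, cataract, ...
y = rng.integers(0, 2, 1000)     # stand-in sarcopenia labels
X_train, X_test, y_train, y_test = X[:800], X[800:], y[:800], y[800:]

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                    eval_metric="auc")
clf.fit(X_train, y_train)
print("test AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```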
Affiliation(s)
- Bo Ram Kim: Department of Ophthalmology, Hangil Eye Hospital, Incheon, Republic of Korea
- Tae Keun Yoo: B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, Republic of Korea; VISUWORKS, Seoul, Republic of Korea
- Hong Kyu Kim: Department of Ophthalmology, Dankook University College of Medicine, Dankook University Hospital, Cheonan, Republic of Korea
- Ik Hee Ryu: B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, Republic of Korea; VISUWORKS, Seoul, Republic of Korea
- Jin Kuk Kim: B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, Republic of Korea; VISUWORKS, Seoul, Republic of Korea
- In Sik Lee: B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, Republic of Korea
- Kim JS (affiliation not listed)
- Shin DH (affiliation not listed)
- Young-Sang Kim: Department of Family Medicine, CHA Bundang Medical Centre, CHA University, Seongnam, Republic of Korea
- Bom Taeck Kim: Department of Family Practice & Community Health, Ajou University School of Medicine, Suwon, Gyeonggi-do 16499, Republic of Korea
33
Nderitu P, Nunez do Rio JM, Webster ML, Mann SS, Hopkins D, Cardoso MJ, Modat M, Bergeles C, Jackson TL. Automated image curation in diabetic retinopathy screening using deep learning. Sci Rep 2022; 12:11196. [PMID: 35778615] [PMCID: PMC9249740] [DOI: 10.1038/s41598-022-15491-1]
Abstract
Diabetic retinopathy (DR) screening images are heterogeneous and contain undesirable non-retinal, incorrect-field and ungradable samples that require curation, a laborious task to perform manually. We developed and validated single- and multi-output deep learning (DL) models classifying laterality, retinal presence, retinal field and gradability for automated curation. The internal dataset comprised 7743 images from DR screening in the UK, with 1479 external test images from Portugal and Paraguay. Internal vs external multi-output AUROCs were: laterality right 0.994 vs 0.905, left 0.994 vs 0.911, and unidentifiable 0.996 vs 0.680; retinal presence 1.000 vs 1.000; retinal field macula 0.994 vs 0.955, nasal 0.995 vs 0.962, and other retinal field 0.997 vs 0.944; gradability 0.985 vs 0.918. DL effectively detects the laterality, retinal presence, retinal field and gradability of DR screening images, with generalisation between centres and populations. DL models could be used for automated image curation within DR screening.
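For a concrete picture of a multi-output design like the one this abstract evaluates, below is a minimal sketch of a shared CNN backbone with one classification head per curation task (laterality, retinal presence, retinal field, gradability). The backbone choice, input resolution, and class counts are illustrative assumptions; the paper's actual architecture may differ.

```python
# Minimal sketch: multi-output retinal image curation model (illustrative).
# Backbone, input size, and class counts are assumptions, not the paper's.
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = tf.keras.Input(shape=(256, 256, 3))
backbone = tf.keras.applications.EfficientNetB0(
    include_top=False, weights=None, input_tensor=inputs)
x = layers.GlobalAveragePooling2D()(backbone.output)

# One head per curation task, named so losses can be routed per task
laterality  = layers.Dense(3, activation="softmax", name="laterality")(x)   # right / left / unidentifiable
presence    = layers.Dense(1, activation="sigmoid", name="presence")(x)     # retinal vs non-retinal
field       = layers.Dense(3, activation="softmax", name="field")(x)        # macula / nasal / other
gradability = layers.Dense(1, activation="sigmoid", name="gradability")(x)  # gradable vs ungradable

model = Model(inputs, [laterality, presence, field, gradability])
model.compile(
    optimizer="adam",
    loss={"laterality": "sparse_categorical_crossentropy",
          "presence": "binary_crossentropy",
          "field": "sparse_categorical_crossentropy",
          "gradability": "binary_crossentropy"},
    metrics={"gradability": [tf.keras.metrics.AUC(name="auroc")]})
model.summary()
```

Sharing one backbone across tasks lets a single forward pass curate an image end to end, which is what makes this style of model attractive for high-throughput screening pipelines.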
Affiliation(s)
- Paul Nderitu: Section of Ophthalmology, King's College London, London, UK; King's Ophthalmology Research Unit, King's College Hospital, London, UK
- Nunez do Rio JM (affiliation not listed)
- Laura Webster: South East London Diabetic Eye Screening Programme, Guy's and St Thomas' Foundation Trust, London, UK
- Samantha S Mann: South East London Diabetic Eye Screening Programme, Guy's and St Thomas' Foundation Trust, London, UK; Department of Ophthalmology, Guy's and St Thomas' Foundation Trust, London, UK
- David Hopkins: Department of Diabetes, School of Life Course Sciences, King's College London, London, UK; Institute of Diabetes, Endocrinology and Obesity, King's Health Partners, London, UK
- M Jorge Cardoso: School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Marc Modat: School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Christos Bergeles: School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Timothy L Jackson: Section of Ophthalmology, King's College London, London, UK; King's Ophthalmology Research Unit, King's College Hospital, London, UK