1. Cai Y, Chen X, Guo C. AMRM: Attention-based mask reconstruction module for multi-classification of breast cancer histopathological images. Med Eng Phys 2025;139:104335. [PMID: 40306885] [DOI: 10.1016/j.medengphy.2025.104335]
Abstract
Breast cancer is a leading cause of cancer-related mortality among women globally. Approximately 10% to 15% of breast cancer patients fail to undergo timely screening, resulting in a missed opportunity for optimal treatment. Computer-aided diagnosis (CAD) systems have been used successfully in breast cancer diagnosis. Nevertheless, current systems have difficulty achieving a high degree of accuracy, and the majority of research efforts focus on binary classification distinguishing benign from malignant tumors. Different subtypes of breast cancer require different targeted therapeutic approaches, so precise classification of the breast cancer subtype has a major impact on treatment decisions. To improve the accuracy of breast cancer multi-classification, a novel Attention-based Mask Reconstruction Module (AMRM) is proposed. AMRM extracts features from breast cancer histopathological images through an attention module and then performs mask reconstruction to generate reconstructed features. These reconstructed features are then used in a multi-classification task to accurately classify histopathological images of breast cancer. AMRM enables the network to effectively separate background from foreground in histopathological images, reducing background interference, improving adaptability to background changes, aligning the extracted features with pathologists' expectations, and improving classification accuracy. Experiments on the BreakHis dataset show that adding AMRM significantly improved multi-classification accuracy for the AlexNet, VGG11, ResNet-50 and Data-efficient Image Transformer (DeiT) models, reaching 88.48%, 93.40%, 96.49% and 94.10%, respectively, gains of 8.28%, 2.11%, 1.27% and 1.26% over the corresponding baselines.
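The AMRM implementation itself is not included in this listing; as an illustration of the general idea only (attention scores pooled over channels, binarized into a foreground mask, and multiplied back into the features), here is a minimal pure-Python sketch with invented shapes and an illustrative threshold:

```python
import math
import random

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_mask_reconstruct(features, threshold=0.5):
    """Score each spatial position by channel-pooled attention, binarize the
    scores into a foreground mask, and reconstruct masked features."""
    scores = [sum(f) / len(f) for f in features]  # channel-pooled saliency
    attn = softmax(scores)
    peak = max(attn)
    mask = [1.0 if a / peak >= threshold else 0.0 for a in attn]
    # Background positions are zeroed out; foreground features pass through.
    recon = [[v * m for v in f] for f, m in zip(features, mask)]
    return recon, mask

random.seed(0)
feats = [[random.gauss(0, 1) for _ in range(8)] for _ in range(16)]  # 16 positions x 8 channels
recon, mask = attention_mask_reconstruct(feats)
assert len(recon) == 16 and max(mask) == 1.0
```

The real module operates on convolutional feature maps inside the backbone; this sketch only shows the mask-then-reconstruct pattern on a flat list of positions.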
Affiliation(s)
- Yanguang Cai
- School of Automation, Guangdong University of Technology, Guangzhou, 511400, Guangdong, China; School of Intelligent Manufacturing and Electrical Engineering, Guangzhou Institute of Science and Technology, Guangzhou, 510540, Guangdong, China
- Xiang Chen
- School of Automation, Guangdong University of Technology, Guangzhou, 511400, Guangdong, China
- Changle Guo
- School of Automation, Guangdong University of Technology, Guangzhou, 511400, Guangdong, China
2. Dutta K, Pal D, Li S, Shyam C, Shoghi KI. Corr-A-Net: Interpretable Attention-Based Correlated Feature Learning framework for predicting of HER2 Score in Breast Cancer from H&E Images. medRxiv [Preprint] 2025:2025.04.22.25326227. [PMID: 40313277] [PMCID: PMC12045401] [DOI: 10.1101/2025.04.22.25326227]
Abstract
Human epidermal growth factor receptor 2 (HER2) expression is a critical biomarker for assessing breast cancer (BC) severity and guiding targeted anti-HER2 therapies. The standard method for measuring HER2 expression is manual assessment of IHC slides by pathologists, which is both time intensive and prone to inter- and intra-observer variability. To address these challenges, we developed an interpretable deep-learning pipeline with a Correlational Attention Neural Network (Corr-A-Net) to predict HER2 score from H&E images. Each prediction is accompanied by a confidence score generated by a surrogate confidence-score estimation network trained using an incentivized mechanism. The shared correlated representations generated using the attention mechanism of Corr-A-Net achieved the best predictive accuracy of 0.93 and AUC-ROC of 0.98. Additionally, the correlated representations demonstrated the highest mean effective confidence (MEC) score of 0.85, indicating robust confidence estimation. Corr-A-Net can have profound implications in facilitating prediction of HER2 status from H&E images.
Affiliation(s)
- Kaushik Dutta
- Imaging Science Program, Washington University in St Louis, St Louis, MO USA
- Mallinckrodt Institute of Radiology, Washington University in St Louis, St Louis, MO USA
- Debojyoti Pal
- Imaging Science Program, Washington University in St Louis, St Louis, MO USA
- Mallinckrodt Institute of Radiology, Washington University in St Louis, St Louis, MO USA
- Suya Li
- Imaging Science Program, Washington University in St Louis, St Louis, MO USA
- Mallinckrodt Institute of Radiology, Washington University in St Louis, St Louis, MO USA
- Chandresh Shyam
- Mallinckrodt Institute of Radiology, Washington University in St Louis, St Louis, MO USA
- Kooresh I. Shoghi
- Imaging Science Program, Washington University in St Louis, St Louis, MO USA
- Mallinckrodt Institute of Radiology, Washington University in St Louis, St Louis, MO USA
- Department of Biomedical Engineering, Washington University in St Louis, St Louis, MO USA
- Lead contact
3. Karasayar AHD, Kulaç İ, Kapucuoğlu N. Advances in Breast Cancer Care: The Role of Artificial Intelligence and Digital Pathology in Precision Medicine. Eur J Breast Health 2025;21:93-100. [PMID: 40028897] [PMCID: PMC11934827] [DOI: 10.4274/ejbh.galenos.2025.2024-12-8]
Abstract
Artificial intelligence (AI) and digital pathology are transforming breast cancer management by addressing the limitations inherent in traditional histopathological methods. The application of machine learning algorithms has enhanced the ability of AI systems to classify breast cancer subtypes, grade tumors, and quantify key biomarkers, thereby improving diagnostic accuracy and prognostic precision. Furthermore, AI-powered image analysis has demonstrated superiority in detecting lymph node metastases, contributing to more precise staging, treatment planning, and reduced evaluation time. The ability of AI to predict molecular markers, including human epidermal growth factor receptor 2 status, BRCA mutations and homologous recombination deficiency, offers substantial potential for the development of personalized treatment strategies. A collaborative approach between pathologists and AI systems is essential to fully harness the potential of this technology. Although AI provides automation and objective analysis, human expertise remains indispensable for the interpretation of results and clinical decision-making. This partnership is anticipated to transform breast cancer care by enhancing patient outcomes and optimizing treatment approaches.
Affiliation(s)
- Ayşe Hümeyra Dur Karasayar
- Graduate School of Health Sciences, Koç University Faculty of Medicine, İstanbul, Turkey
- Department of Pathology, Başakşehir Çam and Sakura Hospital, İstanbul, Turkey
- İbrahim Kulaç
- Graduate School of Health Sciences, Koç University Faculty of Medicine, İstanbul, Turkey
- Koç University & İş Bank Artificial Intelligence Center, Koç University, İstanbul, Turkey
- Research Center for Translational Medicine, Koç University, İstanbul, Turkey
- Department of Pathology, Koç University Faculty of Medicine, İstanbul, Turkey
- Nilgün Kapucuoğlu
- Department of Pathology, Koç University Faculty of Medicine, İstanbul, Turkey
4. Pantanowitz L, Pearce T, Abukhiran I, Hanna M, Wheeler S, Soong TR, Tafti AP, Pantanowitz J, Lu MY, Mahmood F, Gu Q, Rashidi HH. Nongenerative Artificial Intelligence in Medicine: Advancements and Applications in Supervised and Unsupervised Machine Learning. Mod Pathol 2025;38:100680. [PMID: 39675426] [DOI: 10.1016/j.modpat.2024.100680]
Abstract
The use of artificial intelligence (AI) within pathology and health care has advanced extensively. We have accordingly witnessed an increased adoption of various AI tools that are transforming our approach to clinical decision support, personalized medicine, predictive analytics, automation, and discovery. The familiar and more reliable AI tools that have been incorporated within health care thus far fall mostly under the nongenerative AI domain, which includes supervised and unsupervised machine learning (ML) techniques. This review article explores how such nongenerative AI methods, rooted in traditional rules-based systems, enhance diagnostic accuracy, efficiency, and consistency within medicine. Key concepts and the application of supervised learning models (ie, classification and regression) such as decision trees, support vector machines, linear and logistic regression, K-nearest neighbor, and neural networks are explained along with the newer landscape of neural network-based nongenerative foundation models. Unsupervised learning techniques, including clustering, dimensionality reduction, and anomaly detection, are also discussed for their roles in uncovering novel disease subtypes or identifying outliers. Technical details related to the application of nongenerative AI algorithms for analyzing whole slide images are also highlighted. The performance, explainability, and reliability of nongenerative AI models essential for clinical decision-making are also reviewed, as well as challenges related to data quality, model interpretability, and risk of data drift. An understanding of which AI-ML models to employ and which shortcomings need to be addressed is imperative to safely and efficiently leverage, integrate, and monitor these traditional AI tools in clinical practice and research.
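One of the supervised methods surveyed, K-nearest neighbor, is simple enough to sketch directly; the toy data and the choice of k below are illustrative, not from the review:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy two-class data: points near (0, 0) are "benign", near (5, 5) "malignant".
train = [((0, 0), "benign"), ((0, 1), "benign"), ((1, 0), "benign"),
         ((5, 5), "malignant"), ((5, 6), "malignant"), ((6, 5), "malignant")]
assert knn_predict(train, (0.5, 0.5)) == "benign"
assert knn_predict(train, (5.5, 5.5)) == "malignant"
```

An odd k avoids ties in two-class voting; real pathology features would be high-dimensional embeddings rather than 2-D points.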
Affiliation(s)
- Liron Pantanowitz
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania; Computational Pathology and AI Center of Excellence (CPACE), University of Pittsburgh, School of Medicine, Pittsburgh, Pennsylvania.
- Thomas Pearce
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania; Computational Pathology and AI Center of Excellence (CPACE), University of Pittsburgh, School of Medicine, Pittsburgh, Pennsylvania
- Ibrahim Abukhiran
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania; Computational Pathology and AI Center of Excellence (CPACE), University of Pittsburgh, School of Medicine, Pittsburgh, Pennsylvania
- Matthew Hanna
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania; Computational Pathology and AI Center of Excellence (CPACE), University of Pittsburgh, School of Medicine, Pittsburgh, Pennsylvania
- Sarah Wheeler
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania; Computational Pathology and AI Center of Excellence (CPACE), University of Pittsburgh, School of Medicine, Pittsburgh, Pennsylvania
- T Rinda Soong
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania; Computational Pathology and AI Center of Excellence (CPACE), University of Pittsburgh, School of Medicine, Pittsburgh, Pennsylvania
- Ahmad P Tafti
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania; Computational Pathology and AI Center of Excellence (CPACE), University of Pittsburgh, School of Medicine, Pittsburgh, Pennsylvania; Health Informatics, School of Health and Rehabilitation Services, University of Pittsburgh, Pittsburgh, Pennsylvania
- Ming Y Lu
- Department of Pathology, Massachusetts General Brigham Hospital, Harvard Medical School, Boston, Massachusetts; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, Massachusetts; Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, Massachusetts
- Faisal Mahmood
- Department of Pathology, Massachusetts General Brigham Hospital, Harvard Medical School, Boston, Massachusetts; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, Massachusetts; Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, Massachusetts
- Qiangqiang Gu
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania; Computational Pathology and AI Center of Excellence (CPACE), University of Pittsburgh, School of Medicine, Pittsburgh, Pennsylvania
- Hooman H Rashidi
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania; Computational Pathology and AI Center of Excellence (CPACE), University of Pittsburgh, School of Medicine, Pittsburgh, Pennsylvania
5. Purma V, Srinath S, Srirangarajan S, Kakkar A, Prathosh AP. GenSelfDiff-HIS: Generative Self-Supervision Using Diffusion for Histopathological Image Segmentation. IEEE Trans Med Imaging 2025;44:618-631. [PMID: 39222449] [DOI: 10.1109/tmi.2024.3453492]
Abstract
Histopathological image segmentation is a laborious and time-intensive task, often requiring analysis from experienced pathologists for accurate examinations. To reduce this burden, supervised machine-learning approaches have been adopted using large-scale annotated datasets for histopathological image analysis. However, in several scenarios, the availability of large-scale annotated data is a bottleneck while training such models. Self-supervised learning (SSL) is an alternative paradigm that provides some respite by constructing models utilizing only unannotated data, which is often abundant. The basic idea of SSL is to train a network to perform one or many pseudo or pretext tasks on unannotated data and use it subsequently as the basis for a variety of downstream tasks. The success of SSL depends critically on the chosen pretext task. While there have been many efforts in designing pretext tasks for classification problems, there have been few attempts at SSL for histopathological image segmentation. Motivated by this, we propose an SSL approach for segmenting histopathological images via generative diffusion models. Our method is based on the observation that diffusion models effectively solve an image-to-image translation task akin to a segmentation task. Hence, we propose generative diffusion as the pretext task for histopathological image segmentation. We also utilize a multi-loss function-based fine-tuning for the downstream task. We validate our method using several metrics on two publicly available datasets along with a newly proposed head and neck (HN) cancer dataset containing Hematoxylin and Eosin (H&E) stained images along with annotations.
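The generative pretext task rests on the standard forward-noising process q(x_t | x_0) of diffusion models; a stdlib-only sketch of that step follows (the beta schedule and patch values are illustrative, and the paper's actual network and fine-tuning are not reproduced):

```python
import math
import random

def forward_diffuse(x0, t, alpha_bar):
    """Sample x_t ~ q(x_t | x_0): scale the clean signal by sqrt(alpha_bar_t)
    and add sqrt(1 - alpha_bar_t)-scaled Gaussian noise. A diffusion model is
    trained to predict eps from (x_t, t) -- the generative pretext task."""
    ab = alpha_bar[t]
    eps = [random.gauss(0.0, 1.0) for _ in x0]
    xt = [math.sqrt(ab) * x + math.sqrt(1.0 - ab) * e for x, e in zip(x0, eps)]
    return xt, eps

# Linear beta schedule -> cumulative signal fractions alpha_bar (illustrative).
T = 100
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]
alpha_bar, prod = [], 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bar.append(prod)

random.seed(0)
x0 = [0.5] * 16  # a flattened toy "patch"
xt, eps = forward_diffuse(x0, t=50, alpha_bar=alpha_bar)
assert len(xt) == len(x0) and alpha_bar[0] > alpha_bar[-1]
```

After this pretext pretraining, the denoising network's encoder would be fine-tuned for segmentation, which is where the paper's multi-loss objective enters.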
6. Rai HM, Yoo J, Agarwal S, Agarwal N. LightweightUNet: Multimodal Deep Learning with GAN-Augmented Imaging Data for Efficient Breast Cancer Detection. Bioengineering (Basel) 2025;12:73. [PMID: 39851348] [PMCID: PMC11761908] [DOI: 10.3390/bioengineering12010073]
Abstract
Breast cancer ranks as the second most prevalent cancer globally and is the most frequently diagnosed cancer among women; therefore, early, automated, and precise detection is essential. Most AI-based techniques for breast cancer detection are complex and have high computational costs. Hence, to overcome this challenge, we have presented the innovative LightweightUNet hybrid deep learning (DL) classifier for the accurate classification of breast cancer. The proposed model boasts a low computational cost due to its smaller number of layers in its architecture, and its adaptive nature stems from its use of depth-wise separable convolution. We have employed a multimodal approach to validate the model's performance, using 13,000 images from two distinct modalities: mammogram imaging (MGI) and ultrasound imaging (USI). We collected the multimodal imaging datasets from seven different sources, including the benchmark datasets DDSM, MIAS, INbreast, BrEaST, BUSI, Thammasat, and HMSS. Since the datasets are from various sources, we have resized them to the uniform size of 256 × 256 pixels and normalized them using the Box-Cox transformation technique. Since the USI dataset is smaller, we have applied the StyleGAN3 model to generate 10,000 synthetic ultrasound images. In this work, we have performed two separate experiments: the first on a real dataset without augmentation and the second on a real + GAN-augmented dataset using our proposed method. During the experiments, we used a 5-fold cross-validation method, and our proposed model obtained good results on the real dataset (87.16% precision, 86.87% recall, 86.84% F1-score, and 86.87% accuracy) without adding any extra data. Similarly, the second experiment provides better performance on the real + GAN-augmented dataset (96.36% precision, 96.35% recall, 96.35% F1-score, and 96.35% accuracy). 
This multimodal approach with LightweightUNet improves performance on the combined dataset by 9.20% in precision, 9.48% in recall, 9.51% in F1-score, and 9.48% in accuracy. The proposed LightweightUNet model performs well owing to its compact network design, GAN-based image augmentation, and multimodal training strategy. These results indicate substantial potential for use in clinical settings.
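The Box-Cox transform used above for normalization has the closed form y = (x^λ − 1)/λ for λ ≠ 0 and y = ln x for λ = 0; a minimal sketch (the λ below is illustrative, not the value fitted in the study):

```python
import math

def box_cox(x, lam):
    """Box-Cox power transform; defined for positive x only."""
    if x <= 0:
        raise ValueError("Box-Cox requires positive inputs")
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

# Pixel intensities rescaled into (0, 1]; lambda chosen for illustration only.
pixels = [0.1, 0.4, 0.8, 1.0]
transformed = [box_cox(p, lam=0.5) for p in pixels]
assert box_cox(1.0, 0.5) == 0.0            # x = 1 maps to 0 for any lambda
assert transformed == sorted(transformed)  # monotone: ordering is preserved
```

In practice λ is fitted per dataset (e.g. by maximum likelihood) before applying the transform to every image.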
Affiliation(s)
- Hari Mohan Rai
- School of Computing, Gachon University, Seongnam 13120, Republic of Korea
- Joon Yoo
- School of Computing, Gachon University, Seongnam 13120, Republic of Korea
- Saurabh Agarwal
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
- Neha Agarwal
- School of Chemical Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
7. Rai HM, Yoo J, Dashkevych S. Transformative Advances in AI for Precise Cancer Detection: A Comprehensive Review of Non-Invasive Techniques. Arch Computat Methods Eng 2025. [DOI: 10.1007/s11831-024-10219-y]
8. Yaqoob A, Mir MA, Jagannadha Rao GVV, Tejani GG. Transforming Cancer Classification: The Role of Advanced Gene Selection. Diagnostics (Basel) 2024;14:2632. [PMID: 39682540] [DOI: 10.3390/diagnostics14232632]
Abstract
Background/Objectives: Accurate classification in cancer research is vital for devising effective treatment strategies. Precise cancer classification depends significantly on selecting the most informative genes from high-dimensional datasets, a task made complex by the extensive data involved. This study introduces the Two-stage MI-PSA Gene Selection algorithm, a novel approach designed to enhance cancer classification accuracy through robust gene selection methods. Methods: The proposed method integrates Mutual Information (MI) and Particle Swarm Optimization (PSO) for gene selection. In the first stage, MI acts as an initial filter, identifying genes rich in cancer-related information. In the second stage, PSO refines this selection to pinpoint an optimal subset of genes for accurate classification. Results: The experimental findings reveal that the MI-PSA method achieves a best classification accuracy of 99.01% with a selected subset of 19 genes, substantially outperforming the MI and SVM methods, which attain best accuracies of 93.44% and 91.26%, respectively, for the same gene count. Furthermore, MI-PSA demonstrates superior performance in terms of average and worst-case accuracy, underscoring its robustness and reliability. Conclusions: The MI-PSA algorithm presents a powerful approach for identifying critical genes essential for precise cancer classification, advancing both our understanding and management of this complex disease.
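The first (MI filter) stage can be sketched with stdlib tools; the discretized toy expression values and gene names below are invented, and the PSO refinement stage is only noted in a comment:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X; Y) in nats for two equal-length discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def mi_filter(genes, labels, top_k):
    """Stage 1: rank genes by mutual information with the class label and keep
    the top_k. (Stage 2 would refine this subset, e.g. with PSO.)"""
    ranked = sorted(genes, key=lambda g: mutual_information(genes[g], labels),
                    reverse=True)
    return ranked[:top_k]

labels = [0, 0, 0, 1, 1, 1]
genes = {
    "gene_a": [0, 0, 0, 1, 1, 1],  # discretized expression tracking the label
    "gene_b": [0, 1, 0, 1, 0, 1],  # expression unrelated to the label
}
assert mi_filter(genes, labels, top_k=1) == ["gene_a"]
```

Real expression data is continuous, so an actual pipeline would discretize (or use a continuous MI estimator) before this ranking step.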
Affiliation(s)
- Abrar Yaqoob
- School of Advanced Science and Language, VIT Bhopal University, Kothrikalan, Sehore, Bhopal 466114, India
- Mushtaq Ahmad Mir
- Department of Clinical Laboratory Sciences, College of Applied Medical Sciences, King Khalid University, Abha 61421, Saudi Arabia
- Ghanshyam G Tejani
- Department of Industrial Engineering and Management, Yuan Ze University, Taoyuan City 320315, Taiwan
- Jadara Research Center, Jadara University, Irbid 21110, Jordan
9. Hopson JB, Flaus A, McGinnity CJ, Neji R, Reader AJ, Hammers A. Deep Convolutional Backbone Comparison for Automated PET Image Quality Assessment. IEEE Trans Radiat Plasma Med Sci 2024;8:893-901. [PMID: 39404656] [PMCID: PMC7616552] [DOI: 10.1109/trpms.2024.3436697]
Abstract
Pretraining deep convolutional network mappings using natural images helps with medical imaging analysis tasks; this is important given the limited number of clinically-annotated medical images. Many two-dimensional pretrained backbone networks, however, are currently available. This work compared 18 different backbones from 5 architecture groups (pretrained on ImageNet) for the task of assessing [18F]FDG brain Positron Emission Tomography (PET) image quality (reconstructed at seven simulated doses), based on three clinical image quality metrics (global quality rating, pattern recognition, and diagnostic confidence). Using two-dimensional randomly sampled patches, up to eight patients (at three dose levels each) were used for training, with three separate patient datasets used for testing. Each backbone was trained five times with the same training and validation sets, and with six cross-folds. Training only the final fully connected layer (with ~6,000-20,000 trainable parameters) achieved a test mean-absolute-error of ~0.5 (which was within the intrinsic uncertainty of clinical scoring). To compare "classical" and over-parameterized regimes, the pretrained weights of the last 40% of the network layers were then unfrozen. The mean-absolute-error fell below 0.5 for 14 out of the 18 backbones assessed, including two that previously failed to train. Generally, backbones with residual units (e.g. DenseNets and ResNetV2s) were suited to this task, in terms of achieving the lowest mean-absolute-error at test time (~0.45 - 0.5). This proof-of-concept study shows that over-parameterization may also be important for automated PET image quality assessments.
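The acceptance criterion here is a test mean-absolute-error within the intrinsic uncertainty of clinical scoring (about 0.5 points); a sketch of that check with hypothetical scores, not the study's data:

```python
def mean_absolute_error(preds, targets):
    """Average absolute deviation between predicted and reference scores."""
    assert preds and len(preds) == len(targets)
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)

# Hypothetical clinical quality ratings vs. model predictions (5-point scale).
clinical = [3.0, 4.0, 2.0, 5.0, 1.0]
predicted = [3.4, 3.8, 2.5, 4.6, 1.3]
mae = mean_absolute_error(predicted, clinical)
assert abs(mae - 0.36) < 1e-9  # (0.4 + 0.2 + 0.5 + 0.4 + 0.3) / 5
assert mae < 0.5               # within the clinical-scoring uncertainty band
```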
Affiliation(s)
- Anthime Flaus
- King's College London & Guy's and St Thomas' PET Centre, King's College London
- Colm J McGinnity
- King's College London & Guy's and St Thomas' PET Centre, King's College London
- Radhouene Neji
- Department of Biomedical Engineering, King's College London; Siemens Healthcare Limited
- Alexander Hammers
- King's College London & Guy's and St Thomas' PET Centre, King's College London
10. Karthiga R, Narasimhan K, V T, M H, Amirtharajan R. Review of AI & XAI-based breast cancer diagnosis methods using various imaging modalities. Multimed Tools Appl 2024. [DOI: 10.1007/s11042-024-20271-2]
11. Lubbad MAH, Kurtulus IL, Karaboga D, Kilic K, Basturk A, Akay B, Nalbantoglu OU, Yilmaz OMD, Ayata M, Yilmaz S, Pacal I. A Comparative Analysis of Deep Learning-Based Approaches for Classifying Dental Implants Decision Support System. J Imaging Inform Med 2024;37:2559-2580. [PMID: 38565730] [PMCID: PMC11522249] [DOI: 10.1007/s10278-024-01086-x]
Abstract
This study aims to provide an effective solution for the autonomous identification of dental implant brands through a deep learning-based computer diagnostic system. It also seeks to ascertain the system's potential in clinical practice and to offer a strategic framework for improving diagnosis and treatment processes in implantology. This study employed a total of 28 different deep learning models, including 18 convolutional neural network (CNN) models (VGG, ResNet, DenseNet, EfficientNet, RegNet, ConvNeXt) and 10 vision transformer models (Swin and Vision Transformer). The dataset comprises 1258 panoramic radiographs from patients who received implant treatments at Erciyes University Faculty of Dentistry between 2012 and 2023. It was utilized for training and evaluating the deep learning models and consists of prototypes from six different implant systems provided by six manufacturers. The deep learning-based system provided high classification accuracy across the different dental implant brands. Among all the architectures evaluated, the small variant of the ConvNeXt architecture achieved an impressive accuracy of 94.2%, demonstrating a high level of classification success. This study emphasizes the effectiveness of deep learning-based systems in achieving high classification accuracy for dental implant types. These findings pave the way for integrating advanced deep learning tools into clinical practice, promising significant improvements in patient care and treatment outcomes.
Affiliation(s)
- Mohammed A H Lubbad
- Department of Computer Engineering, Engineering Faculty, Erciyes University, 38039, Kayseri, Turkey.
- Artificial Intelligence and Big Data Application and Research Center, Erciyes University, Kayseri, Turkey.
- Dervis Karaboga
- Department of Computer Engineering, Engineering Faculty, Erciyes University, 38039, Kayseri, Turkey
- Artificial Intelligence and Big Data Application and Research Center, Erciyes University, Kayseri, Turkey
- Kerem Kilic
- Department of Prosthodontics, Dentistry Faculty, Erciyes University, Kayseri, Turkey
- Alper Basturk
- Department of Computer Engineering, Engineering Faculty, Erciyes University, 38039, Kayseri, Turkey
- Artificial Intelligence and Big Data Application and Research Center, Erciyes University, Kayseri, Turkey
- Bahriye Akay
- Department of Computer Engineering, Engineering Faculty, Erciyes University, 38039, Kayseri, Turkey
- Artificial Intelligence and Big Data Application and Research Center, Erciyes University, Kayseri, Turkey
- Ozkan Ufuk Nalbantoglu
- Department of Computer Engineering, Engineering Faculty, Erciyes University, 38039, Kayseri, Turkey
- Artificial Intelligence and Big Data Application and Research Center, Erciyes University, Kayseri, Turkey
- Mustafa Ayata
- Department of Prosthodontics, Dentistry Faculty, Erciyes University, Kayseri, Turkey
- Serkan Yilmaz
- Department of Dentomaxillofacial Radiology, Dentistry Faculty, Erciyes University, Kayseri, Turkey
- Ishak Pacal
- Department of Computer Engineering, Engineering Faculty, Igdir University, Igdir, Turkey
- Artificial Intelligence and Big Data Application and Research Center, Erciyes University, Kayseri, Turkey
12. Leblebicioglu Kurtulus I, Lubbad M, Yilmaz OMD, Kilic K, Karaboga D, Basturk A, Akay B, Nalbantoglu U, Yilmaz S, Ayata M, Pacal I. A robust deep learning model for the classification of dental implant brands. J Stomatol Oral Maxillofac Surg 2024;125:101818. [PMID: 38462066] [DOI: 10.1016/j.jormas.2024.101818]
Abstract
OBJECTIVE: When the brand of an existing implant is not known, treatment options for complications arising from implant procedures can be significantly limited. This research aims to explore the application of deep learning techniques for the classification of dental implant systems using panoramic radiographs. The primary objective is to assess the superiority of the proposed model in achieving accurate and efficient dental implant classification. MATERIAL AND METHODS: A comprehensive analysis was conducted using a diverse set of 25 convolutional neural network (CNN) models, including popular architectures such as VGG16, ResNet-50, EfficientNet, and ConvNeXt. A dataset of 1258 panoramic radiographs from patients who underwent implant treatment at a faculty of dentistry was utilized for training and evaluation, with six different dental implant systems employed as prototypes for the classification task. Precision, recall, F1 score, and support for each class are included in the classification report to ensure accurate and reliable results. RESULTS: The experimental results demonstrate that the proposed model consistently outperformed the other evaluated CNN architectures in terms of accuracy, precision, recall, and F1-score. With an accuracy of 95.74% and high precision and recall, the ConvNeXt model showed its superiority in accurately classifying dental implant systems. Notably, this performance was achieved with a relatively small number of parameters, indicating efficiency and speed during inference. CONCLUSION: The findings highlight the effectiveness of deep learning techniques, particularly the proposed model, in accurately classifying dental implant systems from panoramic radiographs.
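A per-class precision/recall/F1/support report of the kind described can be computed directly from paired label lists; the toy labels below are illustrative placeholders, not the study's implant systems:

```python
from collections import Counter

def per_class_report(y_true, y_pred):
    """Precision, recall, F1, and support per class from paired label lists."""
    support = Counter(y_true)
    report = {}
    for c in sorted(set(y_true) | set(y_pred)):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        report[c] = {"precision": prec, "recall": rec, "f1": f1,
                     "support": support[c]}
    return report

# Toy three-brand example.
y_true = ["A", "A", "B", "B", "C", "C"]
y_pred = ["A", "B", "B", "B", "C", "A"]
r = per_class_report(y_true, y_pred)
assert r["B"]["recall"] == 1.0     # both true B samples were recovered
assert r["A"]["precision"] == 0.5  # one of the two predicted A was correct
```

Reporting per-class support alongside the metrics, as the paper does, makes it visible when a high overall accuracy hides weak performance on a rare brand.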
Affiliation(s)
- Mohammed Lubbad
- Department of Computer Engineering, Faculty of Engineering, Erciyes University, Kayseri, Turkey
- Kerem Kilic
- Department of Prosthodontics, Faculty of Dentistry, Erciyes University, Kayseri, Turkey
- Dervis Karaboga
- Department of Computer Engineering, Faculty of Engineering, Erciyes University, Kayseri, Turkey
- Alper Basturk
- Department of Computer Engineering, Faculty of Engineering, Erciyes University, Kayseri, Turkey
- Bahriye Akay
- Department of Computer Engineering, Faculty of Engineering, Erciyes University, Kayseri, Turkey
- Ufuk Nalbantoglu
- Department of Computer Engineering, Faculty of Engineering, Erciyes University, Kayseri, Turkey
- Serkan Yilmaz
- Department of Dentomaxillofacial Radiology, Ministry of Health, Mersin Oral and Dental Health Hospital, Mersin, Turkey
- Mustafa Ayata
- Dentos Oral and Dental Health Polyclinic, Kayseri, Turkey
- Ishak Pacal
- Department of Computer Engineering, Faculty of Engineering, Igdir University, Igdir, Turkey
|
13
Abhisheka B, Biswas SK, Purkayastha B. HBMD-Net: Feature Fusion Based Breast Cancer Classification with Class Imbalance Resolution. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2024; 37:1440-1457. [PMID: 38409609 PMCID: PMC11300733 DOI: 10.1007/s10278-024-01046-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/21/2023] [Revised: 02/06/2024] [Accepted: 02/09/2024] [Indexed: 02/28/2024]
Abstract
Breast cancer, a widespread global disease, poses a major threat to women's health and lives, ranking among the most dangerous malignant tumors they face. Many researchers have proposed computer-aided diagnosis systems for classifying breast cancer. The majority of these approaches rely on deep learning (DL) methods, which are not entirely reliable: they overlook the necessity of incorporating both local and global information for precise tumor detection, even though such subtle nuances are crucial for accurate breast cancer classification. In addition, few breast cancer datasets are publicly available, and those that are tend to be imbalanced. This paper therefore presents the hybrid breast mass detection network (HBMD-Net) to address two critical challenges: class imbalance, and the fact that relying solely on either global or local features falls short of precise tumor classification. To overcome class imbalance, HBMD-Net incorporates the borderline synthetic minority over-sampling technique (BSMOTE). Simultaneously, it employs a feature fusion approach: ResNet50 extracts deep features that provide global information, while handcrafted features derived using the histogram of oriented gradients (HOG) provide local information. ROI segmentation is also implemented to avoid misclassifications. This integrated strategy substantially enhances breast cancer classification performance. Moreover, the proposed method integrates the block matching and 3D (BM3D) denoising filter to effectively eliminate multiplicative noise, which further improves system performance. The proposed HBMD-Net is evaluated on two breast ultrasound (BUS) datasets, BUSI and UDIAT, achieving accuracies of 99.14% and 94.49%, respectively.
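The global-plus-local fusion idea described in this entry can be sketched in a few lines. This is a minimal illustration, not the paper's pipeline: the crude gradient histogram stands in for full HOG descriptors, and the 2048-dimensional random vector stands in for pooled ResNet50 output.

```python
import numpy as np

def hog_like_features(img, n_bins=9):
    # Crude gradient-orientation histogram (stand-in for full HOG descriptors).
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientations in [0, pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)        # L1-normalise the histogram

def fuse_features(deep_vec, local_vec):
    # Late fusion by concatenation: global CNN features + local texture cues.
    return np.concatenate([deep_vec, local_vec])

rng = np.random.default_rng(0)
ultrasound_patch = rng.random((64, 64))      # placeholder ROI
deep_vec = rng.random(2048)                  # placeholder for pooled ResNet50 features
fused = fuse_features(deep_vec, hog_like_features(ultrasound_patch))
print(fused.shape)                           # (2057,)
```

A classifier would then be trained on the fused vector; BSMOTE resampling and BM3D denoising happen upstream of this step.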
Affiliation(s)
- Barsha Abhisheka, Computer Science and Engineering, NIT Silchar, Silchar, 788010, Assam, India
- Saroj Kr Biswas, Computer Science and Engineering, NIT Silchar, Silchar, 788010, Assam, India
14
Sureshkumar V, Prasad RSN, Balasubramaniam S, Jagannathan D, Daniel J, Dhanasekaran S. Breast Cancer Detection and Analytics Using Hybrid CNN and Extreme Learning Machine. J Pers Med 2024; 14:792. [PMID: 39201984 PMCID: PMC11355507 DOI: 10.3390/jpm14080792] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2024] [Revised: 07/08/2024] [Accepted: 07/15/2024] [Indexed: 09/03/2024] Open
Abstract
Early detection of breast cancer is essential for increasing survival rates, as it is one of the primary causes of death for women globally. Mammograms are extensively used by physicians for diagnosis, but selecting appropriate algorithms for image enhancement, segmentation, feature extraction, and classification remains a significant research challenge. This paper presents a computer-aided diagnosis (CAD)-based hybrid model combining convolutional neural networks (CNNs) with a pruned ensembled extreme learning machine (HCPELM) to enhance breast cancer detection, segmentation, feature extraction, and classification. The model employs the rectified linear unit (ReLU) activation function after removing artifacts and pectoral muscles, and hybridizing HCPELM with the CNN improves feature extraction. The hybrid elements are convolutional and fully connected layers: convolutional layers extract spatial features such as edges and textures, with more complex features in deeper layers, while the fully connected layers combine these features non-linearly to perform the final classification. The ELM performs the classification and recognition tasks, aiming for state-of-the-art performance. The hybrid classifier is used for transfer learning by freezing certain layers and modifying the architecture to reduce parameters, easing cancer detection. The HCPELM classifier was trained on the MIAS database and evaluated against benchmark methods. It achieved a breast image recognition accuracy of 86%, outperforming benchmark deep learning models and demonstrating superior performance in early detection and diagnosis, thus aiding healthcare practitioners in breast cancer diagnosis.
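The extreme learning machine at the heart of this hybrid is simple to sketch: a fixed random hidden layer followed by a closed-form least-squares solve for the output weights. The feature dimensionality, hidden size, and toy labels below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def elm_fit(X, Y, n_hidden=64, seed=0):
    # Random input weights are drawn once and never trained.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.maximum(X @ W + b, 0.0)    # ReLU hidden activations
    beta = np.linalg.pinv(H) @ Y      # closed-form least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.maximum(X @ W + b, 0.0) @ beta

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 32))            # e.g. CNN-extracted features for 200 patches
Y = np.eye(2)[rng.integers(0, 2, 200)]    # one-hot benign/malignant labels
W, b, beta = elm_fit(X, Y)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
```

Because only `beta` is solved for, training is a single pseudo-inverse rather than iterative backpropagation, which is what makes ELMs fast to retrain on top of frozen CNN features.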
Affiliation(s)
- Vidhushavarshini Sureshkumar, Department of Computer Science and Engineering, SRM Institute of Science and Technology, Vadapalani, Chennai 600026, India
- Dhayanithi Jagannathan, Department of Computer Science and Engineering, Sona College of Technology, Salem 636005, India
- Jayanthi Daniel, Department of Electronics and Communication Engineering, Rajalakshmi Engineering College, Chennai 602105, India
15
Sohrabei S, Moghaddasi H, Hosseini A, Ehsanzadeh SJ. Investigating the effects of artificial intelligence on the personalization of breast cancer management: a systematic study. BMC Cancer 2024; 24:852. [PMID: 39026174 PMCID: PMC11256548 DOI: 10.1186/s12885-024-12575-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2023] [Accepted: 06/27/2024] [Indexed: 07/20/2024] Open
Abstract
BACKGROUND Providing the right specialized treatment to the right patient at the right time is essential in cancer management. Targeted therapy tailored to the genetic changes of each breast cancer patient is a desirable feature of precision oncology, which can not only reduce disease progression but also potentially increase patient survival. Artificial intelligence used alongside precision oncology can help physicians identify and select more effective treatments for patients. METHOD A systematic review was conducted using the PubMed, Embase, Scopus, and Web of Science databases in September 2023. The search strategy used the keywords Breast Cancer, Artificial Intelligence, and Precision Oncology, along with their synonyms, in article titles. Descriptive, qualitative, review, and non-English studies were excluded. Article quality and bias were assessed using the SJR journal and JBI indices, as well as the PRISMA 2020 guideline. RESULTS Forty-six studies focusing on personalized breast cancer management using artificial intelligence models were selected. Seventeen studies using various deep learning methods achieved satisfactory outcomes in predicting treatment response and prognosis, contributing to personalized breast cancer management. Two studies utilizing neural networks and clustering provided acceptable indicators for predicting patient survival and categorizing breast tumors. One study employed transfer learning to predict treatment response. Twenty-six studies utilizing machine-learning methods demonstrated that these techniques can improve breast cancer classification, screening, diagnosis, and prognosis. The most frequent modeling techniques were NB, SVM, RF, XGBoost, and reinforcement learning. The average area under the curve (AUC) for the models was 0.91, and the average accuracy, sensitivity, specificity, and precision were in the range of 90-96%. CONCLUSION Artificial intelligence has proven effective in assisting physicians and researchers in managing breast cancer treatment by uncovering hidden patterns in complex omics and genetic data. Intelligent processing of omics data through protein and gene pattern classification, together with deep neural models, has the potential to significantly transform the management of complex diseases.
Affiliation(s)
- Solmaz Sohrabei, Department of Health Information Technology and Management, Medical Informatics, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Hamid Moghaddasi, Department of Health Information Technology and Management, Medical Informatics, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Azamossadat Hosseini, Department of Health Information Technology and Management, Health Information Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Seyed Jafar Ehsanzadeh, Department of English Language, School of Health Management and Information Sciences, Iran University of Medical Sciences, Tehran, Iran
16
Solorzano L, Robertson S, Acs B, Hartman J, Rantalainen M. Ensemble-based deep learning improves detection of invasive breast cancer in routine histopathology images. Heliyon 2024; 10:e32892. [PMID: 39022088 PMCID: PMC11252882 DOI: 10.1016/j.heliyon.2024.e32892] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2024] [Revised: 06/11/2024] [Accepted: 06/11/2024] [Indexed: 07/20/2024] Open
Abstract
Accurate detection of invasive breast cancer (IC) can provide decision support to pathologists as well as improve downstream computational analyses, where detection of IC is a first step. Tissue containing IC is characterized by specific morphological features, which can be learned by convolutional neural networks (CNNs). Here, we compare a single CNN model against an ensemble of several base models with the same CNN architecture, and we evaluate prediction performance as well as variability across ensemble-based model predictions. Two in-house datasets comprising 587 whole slide images (WSIs) are used to train an ensemble of ten InceptionV3 models whose consensus determines the presence of IC. A novel visualisation strategy was developed to communicate ensemble agreement spatially. Performance was evaluated on an internal test set with 118 WSIs and on an additional external dataset (TCGA breast cancer) with 157 WSIs. The ensemble-based strategy outperformed the single-CNN alternative with respect to tile-level accuracy in 89% of all WSIs in the internal test set, where overall accuracy was 0.92 (DICE coefficient 0.90) for the ensemble model and 0.85 (DICE coefficient 0.83) for the single CNN. On TCGA, the ensemble outperformed the single CNN in 96.8% of the WSIs, with an accuracy of 0.87 (DICE coefficient 0.89) versus 0.75 (DICE coefficient 0.78) for the single model. The results suggest that an ensemble-based modeling strategy for invasive breast cancer detection consistently outperforms the conventional single-model alternative. Furthermore, visualisation of ensemble agreement and confusion areas provides direct visual interpretation of the results. High-performing cancer detection can provide decision support in the routine pathology setting as well as facilitate downstream computational analyses.
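The consensus step described in this entry can be illustrated with a majority vote over per-tile probabilities, plus the fraction of base models agreeing with the consensus, which is the quantity a spatial agreement map would display. The ten-model, 0.5-threshold setup mirrors the description above; the tiny tile grid is made up.

```python
import numpy as np

def ensemble_consensus(probs, threshold=0.5):
    # probs: (n_models, n_tiles) predicted probabilities of invasive cancer.
    votes = probs >= threshold                  # per-model binary decisions
    pos_frac = votes.mean(axis=0)               # fraction of models voting positive
    consensus = pos_frac >= 0.5                 # majority vote per tile
    agreement = np.where(consensus, pos_frac, 1.0 - pos_frac)
    return consensus.astype(int), agreement     # tile labels + agreement values

rng = np.random.default_rng(0)
probs = rng.random((10, 6))                     # 10 base models, 6 tiles
labels, agreement = ensemble_consensus(probs)
```

Tiles where `agreement` is close to 1.0 are confidently labelled; values near 0.5 flag the "confusion areas" that merit pathologist review.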
Affiliation(s)
- Leslie Solorzano, Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Balazs Acs, Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden
- Johan Hartman, Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden
- Mattias Rantalainen, Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
17
Ivanova M, Pescia C, Trapani D, Venetis K, Frascarelli C, Mane E, Cursano G, Sajjadi E, Scatena C, Cerbelli B, d’Amati G, Porta FM, Guerini-Rocco E, Criscitiello C, Curigliano G, Fusco N. Early Breast Cancer Risk Assessment: Integrating Histopathology with Artificial Intelligence. Cancers (Basel) 2024; 16:1981. [PMID: 38893102 PMCID: PMC11171409 DOI: 10.3390/cancers16111981] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2024] [Revised: 05/13/2024] [Accepted: 05/17/2024] [Indexed: 06/21/2024] Open
Abstract
Effective risk assessment in early breast cancer is essential for informed clinical decision-making, yet consensus on defining risk categories remains challenging. This paper explores evolving approaches in risk stratification, encompassing histopathological, immunohistochemical, and molecular biomarkers alongside cutting-edge artificial intelligence (AI) techniques. Leveraging machine learning, deep learning, and convolutional neural networks, AI is reshaping predictive algorithms for recurrence risk, thereby revolutionizing diagnostic accuracy and treatment planning. Beyond detection, AI applications extend to histological subtyping, grading, lymph node assessment, and molecular feature identification, fostering personalized therapy decisions. With rising cancer rates, it is crucial to implement AI to accelerate breakthroughs in clinical practice, benefiting both patients and healthcare providers. However, it is important to recognize that while AI offers powerful automation and analysis tools, it lacks the nuanced understanding, clinical context, and ethical considerations inherent to human pathologists in patient care. Hence, the successful integration of AI into clinical practice demands collaborative efforts between medical experts and computational pathologists to optimize patient outcomes.
Affiliation(s)
- Mariia Ivanova, Division of Pathology, European Institute of Oncology IRCCS, 20141 Milan, Italy
- Carlo Pescia, Division of Pathology, European Institute of Oncology IRCCS, 20141 Milan, Italy
- Dario Trapani, Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology IRCCS, 20141 Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Konstantinos Venetis, Division of Pathology, European Institute of Oncology IRCCS, 20141 Milan, Italy
- Chiara Frascarelli, Division of Pathology, European Institute of Oncology IRCCS, 20141 Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Eltjona Mane, Division of Pathology, European Institute of Oncology IRCCS, 20141 Milan, Italy
- Giulia Cursano, Division of Pathology, European Institute of Oncology IRCCS, 20141 Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Elham Sajjadi, Division of Pathology, European Institute of Oncology IRCCS, 20141 Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Cristian Scatena, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Bruna Cerbelli, Department of Medical-Surgical Sciences and Biotechnologies, Sapienza University of Rome, 00185 Rome, Italy
- Giulia d’Amati, Department of Radiological, Oncological and Pathological Sciences, Sapienza University of Rome, 00185 Rome, Italy
- Francesca Maria Porta, Division of Pathology, European Institute of Oncology IRCCS, 20141 Milan, Italy
- Elena Guerini-Rocco, Division of Pathology, European Institute of Oncology IRCCS, 20141 Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Carmen Criscitiello, Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology IRCCS, 20141 Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Giuseppe Curigliano, Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology IRCCS, 20141 Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Nicola Fusco, Division of Pathology, European Institute of Oncology IRCCS, 20141 Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
18
Safdar Ali Khan M, Husen A, Nisar S, Ahmed H, Shah Muhammad S, Aftab S. Offloading the computational complexity of transfer learning with generic features. PeerJ Comput Sci 2024; 10:e1938. [PMID: 38660182 PMCID: PMC11041970 DOI: 10.7717/peerj-cs.1938] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2023] [Accepted: 02/19/2024] [Indexed: 04/26/2024]
Abstract
Deep learning approaches are generally complex, requiring extensive computational resources and exhibiting high time complexity. Transfer learning is a state-of-the-art approach that reduces these computational requirements by using pre-trained models without compromising accuracy and performance. In conventional studies, pre-trained models are trained on datasets from different but similar domains containing many domain-specific features. The computational requirements of transfer learning depend directly on the number of features, which include both domain-specific and generic features. This article investigates the prospects of reducing the computational requirements of transfer learning models by discarding domain-specific features from a pre-trained model. The approach is applied to breast cancer detection using the curated breast imaging subset of the digital database for screening mammography, with performance metrics including precision, accuracy, recall, F1-score, and computational requirements. Discarding the domain-specific features up to a certain limit provides significant performance improvements and minimizes the computational requirements in terms of training time (reduced by approx. 12%), processor utilization (reduced by approx. 25%), and memory usage (reduced by approx. 22%). The proposed transfer learning strategy increases accuracy (by approx. 7%) and offloads computational complexity expeditiously.
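The savings mechanism, fewer retained features means a smaller head to train, can be sketched as below. The rule of keeping an early fraction of the feature vector is an illustrative assumption; the paper's actual criterion for identifying domain-specific features may differ.

```python
import numpy as np

def truncate_generic_features(features, keep_fraction=0.75):
    # Keep only an early fraction of the pre-trained feature vector, on the
    # assumption that later features are the domain-specific ones to discard.
    # (Illustrative rule; the paper's actual selection criterion may differ.)
    k = max(1, int(features.shape[1] * keep_fraction))
    return features[:, :k]

def classifier_params(n_features, n_classes=2):
    # Parameter count of a linear head; it shrinks with the feature count,
    # which is where the training-time and memory savings come from.
    return n_features * n_classes + n_classes

rng = np.random.default_rng(0)
F = rng.random((16, 1024))                # pooled features for 16 mammogram ROIs
F_small = truncate_generic_features(F, 0.75)
saving = 1 - classifier_params(F_small.shape[1]) / classifier_params(F.shape[1])
```

Here `saving` is roughly the 25% fraction of head parameters avoided at `keep_fraction=0.75`; in practice one would sweep the fraction and watch for the accuracy drop-off the abstract alludes to.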
Affiliation(s)
- Muhammad Safdar Ali Khan, Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Arif Husen, Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan; Department of Computer Science, COMSATS Institute of Information Technology, Lahore, Punjab, Pakistan
- Shafaq Nisar, Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Hasnain Ahmed, Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Syed Shah Muhammad, Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Shabib Aftab, Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
19
Ortiz S, Rojas-Valenzuela I, Rojas F, Valenzuela O, Herrera LJ, Rojas I. Novel methodology for detecting and localizing cancer area in histopathological images based on overlapping patches. Comput Biol Med 2024; 168:107713. [PMID: 38000243 DOI: 10.1016/j.compbiomed.2023.107713] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2022] [Revised: 11/07/2023] [Accepted: 11/15/2023] [Indexed: 11/26/2023]
Abstract
Cancer is one of the most significant pathologies in the world, causing the death of millions of people, and a cure remains out of reach in most cases. Rapid spread is one of its most important features, so many efforts focus on early-stage detection and localization. Medicine has made numerous advances in recent decades with the help of artificial intelligence (AI), reducing costs and saving time. In this paper, deep learning (DL) models are used to present a novel method for detecting and localizing cancerous zones in whole slide images (WSIs), using tissue-patch overlap to improve performance. A novel overlapping methodology is proposed and discussed, together with different alternatives for evaluating the labels of patches that overlap the same zone, to improve detection performance. The goal is to strengthen the labeling of different areas of an image through multiple overlapping patch tests. The results show that the proposed method improves on the traditional framework and provides a different approach to cancer detection. The proposed method, based on applying 3×3, stride-2 average pooling filters to overlapping patch labels, corrects 12.9% of misclassified patches on the HUP dataset and 15.8% on the CINIJ dataset. In addition, a filter is implemented to correct isolated patches that were also misclassified. Finally, a study of the CNN decision threshold analyzes the impact of the threshold value on model accuracy. Altering the decision threshold, together with the isolated-patch filter and the proposed overlapping-patch method, corrects about 20% of the patches mislabeled by the traditional method. As a whole, the proposed method achieves an accuracy rate of 94.6%. The code is available at https://github.com/sergioortiz26/Cancer_overlapping_filter_WSI_images.
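The label-smoothing step, a 3×3, stride-2 average pooling over the grid of per-patch predictions, can be sketched directly; the toy probability grid below is made up, with a 3×3 block of "cancer" patches in the corner.

```python
import numpy as np

def pool_patch_labels(grid, k=3, stride=2):
    # Average-pool the grid of per-patch cancer probabilities so that each
    # output cell blends the overlapping patches covering the same zone.
    h = (grid.shape[0] - k) // stride + 1
    w = (grid.shape[1] - k) // stride + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = grid[i*stride:i*stride+k, j*stride:j*stride+k].mean()
    return out

grid = np.array([[1., 1., 1., 0., 0.],
                 [1., 1., 1., 0., 0.],
                 [1., 1., 1., 0., 0.],
                 [0., 0., 0., 0., 0.],
                 [0., 0., 0., 0., 0.]])
pooled = pool_patch_labels(grid)
# pooled is 2x2: [[1.0, 1/3], [1/3, 1/9]] — border zones get intermediate scores
```

An isolated mislabelled patch inside a uniform region is pulled toward its neighbours' value, which is the correction effect the abstract quantifies.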
Affiliation(s)
- Sergio Ortiz, Department of Computer Architecture and Technology, University of Granada, E.T.S. de Ingenierías Informática y de Telecomunicación, C/ Periodista Daniel Saucedo Aranda S/N CP:18071 Granada, Spain
- Ignacio Rojas-Valenzuela, Department of Computer Architecture and Technology, University of Granada, E.T.S. de Ingenierías Informática y de Telecomunicación, C/ Periodista Daniel Saucedo Aranda S/N CP:18071 Granada, Spain
- Fernando Rojas, Department of Computer Architecture and Technology, University of Granada, E.T.S. de Ingenierías Informática y de Telecomunicación, C/ Periodista Daniel Saucedo Aranda S/N CP:18071 Granada, Spain
- Olga Valenzuela, Department of Applied Mathematics, University of Granada, Facultad de Ciencias, Avenida de la Fuente Nueva S/N CP:18071 Granada, Spain
- Luis Javier Herrera, Department of Computer Architecture and Technology, University of Granada, E.T.S. de Ingenierías Informática y de Telecomunicación, C/ Periodista Daniel Saucedo Aranda S/N CP:18071 Granada, Spain
- Ignacio Rojas, Department of Computer Architecture and Technology, University of Granada, E.T.S. de Ingenierías Informática y de Telecomunicación, C/ Periodista Daniel Saucedo Aranda S/N CP:18071 Granada, Spain
20
Liu L, Wang Y, Zhang P, Qiao H, Sun T, Zhang H, Xu X, Shang H. Collaborative Transfer Network for Multi-Classification of Breast Cancer Histopathological Images. IEEE J Biomed Health Inform 2024; 28:110-121. [PMID: 37294651 DOI: 10.1109/jbhi.2023.3283042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
The incidence of breast cancer is increasing rapidly around the world. Accurate classification of the breast cancer subtype from hematoxylin and eosin images is key to improving the precision of treatment. However, the high similarity between disease subtypes and the uneven distribution of cancer cells seriously affect the performance of multi-classification methods, and it is difficult to apply existing classification methods across multiple datasets. In this article, we propose a collaborative transfer network (CTransNet) for multi-classification of breast cancer histopathological images. CTransNet consists of a transfer learning backbone branch, a residual collaborative branch, and a feature fusion module. The transfer learning branch adopts a pre-trained DenseNet structure to extract image features learned from ImageNet. The residual branch extracts target features from pathological images in a collaborative manner. A feature fusion strategy that optimizes these two branches is used to train and fine-tune CTransNet. Experiments show that CTransNet achieves 98.29% classification accuracy on the public BreaKHis breast cancer dataset, exceeding the performance of state-of-the-art methods. Visual analysis is carried out under the guidance of oncologists. Based on the training parameters of the BreaKHis dataset, CTransNet achieves superior performance on two other public breast cancer datasets (breast-cancer-grade-ICT and ICIAR2018_BACH_Challenge), indicating that CTransNet has good generalization performance.
21
Zaki M, Elallam O, Jami O, EL Ghoubali D, Jhilal F, Alidrissi N, Ghazal H, Habib N, Abbad F, Benmoussa A, Bakkali F. Advancing Tumor Cell Classification and Segmentation in Ki-67 Images: A Systematic Review of Deep Learning Approaches. LECTURE NOTES IN NETWORKS AND SYSTEMS 2024:94-112. [DOI: 10.1007/978-3-031-52385-4_9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/03/2025]
22
Kaur A, Kaushal C, Sandhu JK, Damaševičius R, Thakur N. Histopathological Image Diagnosis for Breast Cancer Diagnosis Based on Deep Mutual Learning. Diagnostics (Basel) 2023; 14:95. [PMID: 38201406 PMCID: PMC10795733 DOI: 10.3390/diagnostics14010095] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2023] [Revised: 12/26/2023] [Accepted: 12/28/2023] [Indexed: 01/12/2024] Open
Abstract
Every year, millions of women across the globe are diagnosed with breast cancer (BC), an illness that is both common and potentially fatal. To provide effective therapy and enhance patient outcomes, an accurate diagnosis must be made as soon as possible. In recent years, deep learning (DL) approaches have shown great effectiveness in a variety of medical imaging applications, including the processing of histopathological images. The objective of this study is to improve the detection of BC by merging qualitative and quantitative data through DL techniques, with an emphasis on deep mutual learning (DML). In addition, a wide variety of breast cancer imaging modalities were investigated to assess the distinction between aggressive and benign BC. On this basis, deep convolutional neural networks (DCNNs) were established to assess histopathological images of BC. On the BreakHis-200×, BACH, and PUIH datasets, the trials indicate that the DML model achieves accuracies of 98.97%, 96.78%, and 96.34%, respectively, outperforming the other methodologies. More specifically, it improves localization results without compromising classification performance, an indication of its increased utility. We intend to proceed with the development of the diagnostic model to make it more applicable to clinical settings.
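The mutual-learning objective underlying DML, each network's cross-entropy augmented with a KL term pulling it toward its peer's predictions, can be sketched as follows; the logits and labels are made-up toy data, not the paper's networks.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def kl(p, q, eps=1e-12):
    # Mean KL(p || q) over the batch.
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1))

def dml_losses(logits1, logits2, labels):
    # Each peer minimises its own cross-entropy plus a KL term that
    # mimics the other peer's predictive distribution.
    p1, p2 = softmax(logits1), softmax(logits2)
    n = len(labels)
    ce1 = -np.mean(np.log(p1[np.arange(n), labels] + 1e-12))
    ce2 = -np.mean(np.log(p2[np.arange(n), labels] + 1e-12))
    return ce1 + kl(p2, p1), ce2 + kl(p1, p2)

rng = np.random.default_rng(0)
logits1 = rng.normal(size=(8, 3))   # 8 samples, 3 hypothetical BC subtypes
logits2 = rng.normal(size=(8, 3))
labels = rng.integers(0, 3, size=8)
loss1, loss2 = dml_losses(logits1, logits2, labels)
```

In training, both networks are updated simultaneously with their respective losses, so each acts as a soft teacher for the other, unlike one-way knowledge distillation.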
Affiliation(s)
- Amandeep Kaur, Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India
- Chetna Kaushal, Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India
- Jasjeet Kaur Sandhu, Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India
- Robertas Damaševičius, Department of Applied Informatics, Vytautas Magnus University, 53361 Akademija, Lithuania
- Neetika Thakur, Junior Laboratory Technician, Postgraduate Institute of Medical Education and Research, Chandigarh 160012, India
23
Harrison P, Hasan R, Park K. State-of-the-Art of Breast Cancer Diagnosis in Medical Images via Convolutional Neural Networks (CNNs). JOURNAL OF HEALTHCARE INFORMATICS RESEARCH 2023; 7:387-432. [PMID: 37927373 PMCID: PMC10620373 DOI: 10.1007/s41666-023-00144-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2022] [Revised: 08/14/2023] [Accepted: 08/22/2023] [Indexed: 11/07/2023]
Abstract
Early detection of breast cancer is crucial for a better prognosis. Various studies have been conducted in which tumor lesions are detected and localized on images. This is a narrative review covering studies across five different image modalities: histopathological, mammogram, magnetic resonance imaging (MRI), ultrasound, and computed tomography (CT) images, which distinguishes it from other reviews that cover fewer modalities. The goal is to have the necessary information, such as pre-processing techniques and CNN-based diagnosis techniques for the five modalities, readily available in one place for future studies. Each modality has pros and cons; for example, mammograms may give a high false-positive rate for radiographically dense breasts, ultrasound's low soft-tissue contrast leads to early-stage false detections, and MRI provides a three-dimensional volumetric image but is expensive and cannot be used as a routine test. Studies were manually reviewed using particular inclusion and exclusion criteria; as a result, 91 recent studies from 2017 to 2022 that classify and detect tumor lesions on breast cancer images across the five image modalities were included. For histopathological images, the maximum accuracy achieved was around 99% and the maximum sensitivity 97.29%, using DenseNet, ResNet34, and ResNet50 architectures. For mammogram images, the maximum accuracy achieved was 96.52% using a customized CNN architecture. For MRI, the maximum accuracy achieved was 98.33% using a customized CNN architecture. For ultrasound, the maximum accuracy achieved was around 99% using DarkNet-53, ResNet-50, G-CNN, and VGG. For CT, the maximum sensitivity achieved was 96% using the Xception architecture. Histopathological and ultrasound images achieved higher accuracy, around 99%, using ResNet34, ResNet50, DarkNet-53, G-CNN, and VGG compared to other modalities, for one or more of the following reasons: use of pre-trained architectures with pre-processing techniques, use of modified architectures with pre-processing techniques, use of two-stage CNNs, and a higher number of studies available for Artificial Intelligence (AI)/machine learning (ML) researchers to reference. One gap we found is that only a single image modality is used for CNN-based diagnosis; in the future, a multiple-image-modality approach could be used to design a CNN architecture with higher accuracy.
Affiliation(s)
- Pratibha Harrison
- Department of Computer and Information Science, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, 02747 MA USA
- Rakib Hasan
- Department of Mechanical Engineering, Khulna University of Engineering & Technology, PhulBari Gate, Khulna, 9203 Bangladesh
- Kihan Park
- Department of Mechanical Engineering, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, 02747 MA USA

24
Morovati B, Lashgari R, Hajihasani M, Shabani H. Reduced Deep Convolutional Activation Features (R-DeCAF) in Histopathology Images to Improve the Classification Performance for Breast Cancer Diagnosis. J Digit Imaging 2023; 36:2602-2612. [PMID: 37532925 PMCID: PMC10584742 DOI: 10.1007/s10278-023-00887-w] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2023] [Revised: 07/19/2023] [Accepted: 07/20/2023] [Indexed: 08/04/2023] Open
Abstract
Breast cancer is the second most common cancer among women worldwide, and diagnosis by pathologists is a time-consuming and subjective procedure. Computer-aided diagnosis frameworks are utilized to relieve pathologist workload by classifying the data automatically, in which deep convolutional neural networks (CNNs) are effective solutions. The features extracted from the activation layer of pre-trained CNNs are called deep convolutional activation features (DeCAF). In this paper, we show that not all DeCAF features necessarily lead to higher accuracy in the classification task and that dimension reduction plays an important role. We propose reduced DeCAF (R-DeCAF) for this purpose, applying different dimension reduction methods to achieve an effective combination of features that captures the essence of the DeCAF features. This framework uses pre-trained CNNs such as AlexNet, VGG-16, and VGG-19 as feature extractors in transfer learning mode. The DeCAF features are extracted from the first fully connected layer of the mentioned CNNs, and a support vector machine is used for classification. Among linear and nonlinear dimensionality reduction algorithms, linear approaches such as principal component analysis (PCA) yield a more effective combination of deep features and lead to higher classification accuracy using a small number of features, given a specific amount of cumulative explained variance (CEV). The proposed method is validated on the BreakHis and ICIAR datasets. Comprehensive results show an improvement in classification accuracy of up to 4.3% with a feature vector size (FVS) of 23 and a CEV of 0.15.
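The R-DeCAF pipeline described above (deep activation features, a linear reduction truncated at a small cumulative explained variance, then an SVM) can be sketched with plain SVD-based PCA. This is a minimal illustration, not the authors' code: the random feature matrix stands in for real DeCAF activations, and the downstream SVM step is omitted.

```python
import numpy as np

def reduce_decaf(features, cev=0.15):
    """PCA via SVD: keep the fewest principal components whose cumulative
    explained variance reaches `cev`, mirroring R-DeCAF's CEV cut-off."""
    centered = features - features.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    ratio = (s ** 2) / np.sum(s ** 2)               # explained-variance ratios
    k = int(np.searchsorted(np.cumsum(ratio), cev) + 1)
    return centered @ vt[:k].T, k                    # reduced features, FVS

# e.g. 200 images x 4096-dim activations from a CNN's first FC layer
rng = np.random.default_rng(0)
feats = rng.standard_normal((200, 4096))
reduced, fvs = reduce_decaf(feats, cev=0.15)
```

The reduced matrix (with `fvs` columns) would then be fed to a classifier such as an SVM, as in the paper.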
Affiliation(s)
- Bahareh Morovati
- Institute of Medical Science and Technology, Shahid Beheshti University, Tehran, Iran
- Reza Lashgari
- Institute of Medical Science and Technology, Shahid Beheshti University, Tehran, Iran
- Mojtaba Hajihasani
- Institute of Medical Science and Technology, Shahid Beheshti University, Tehran, Iran
- Hasti Shabani
- Institute of Medical Science and Technology, Shahid Beheshti University, Tehran, Iran

25
Rai HM, Yoo J. A comprehensive analysis of recent advancements in cancer detection using machine learning and deep learning models for improved diagnostics. J Cancer Res Clin Oncol 2023; 149:14365-14408. [PMID: 37540254 DOI: 10.1007/s00432-023-05216-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2023] [Accepted: 07/26/2023] [Indexed: 08/05/2023]
Abstract
PURPOSE Millions of people lose their lives to several types of fatal diseases. Cancer is one of the most fatal, with risk factors including obesity, alcohol consumption, infections, ultraviolet radiation, smoking, and unhealthy lifestyles. Cancer is abnormal, uncontrolled tissue growth inside the body that may spread to body parts beyond where it originated. It is therefore essential to diagnose cancer at an early stage to provide correct and timely treatment; since manual diagnosis and diagnostic error may cause the death of many patients, much research targets automatic and accurate detection of cancer at an early stage. METHODS In this paper, we present a comparative analysis of the diagnosis of, and recent advances in the detection of, various cancer types using traditional machine learning (ML) and deep learning (DL) models. This study covers four cancer types (brain, lung, skin, and breast) and their detection using ML and DL techniques. The extensive review includes a total of 130 pieces of literature, of which 56 cover ML-based and 74 cover DL-based cancer detection techniques. Only peer-reviewed research papers published in the recent five-year span (2018-2023) were included, analyzed by year of publication, features utilized, best model, dataset/images utilized, and best accuracy. ML- and DL-based techniques were reviewed separately, with accuracy as the performance evaluation metric to maintain homogeneity while verifying classifier efficiency. RESULTS Among all the reviewed literature, DL techniques achieved the highest accuracy of 100%, while ML techniques achieved 99.89%. The lowest accuracies achieved using DL and ML approaches were 70% and 75.48%, respectively. The difference in accuracy between the highest- and lowest-performing models is about 28.8% for skin cancer detection.
In addition, the key findings and challenges for each type of cancer detection using ML and DL techniques are presented. A comparative analysis between the best- and worst-performing models, along with overall key findings and challenges, is provided for future research purposes. Although the analysis is based on accuracy as the performance metric and various parameters, the results demonstrate significant scope for improvement in classification efficiency. CONCLUSION The paper concludes that both ML and DL techniques hold promise in the early detection of various cancer types. However, the study identifies specific challenges that need to be addressed for the widespread implementation of these techniques in clinical settings. The presented results offer valuable guidance for future research in cancer detection, emphasizing the need for continued advancements in ML and DL-based approaches to improve diagnostic accuracy and ultimately save more lives.
Affiliation(s)
- Hari Mohan Rai
- School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, 13120, Gyeonggi-do, Republic of Korea
- Joon Yoo
- School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, 13120, Gyeonggi-do, Republic of Korea

26
Hanna MG, Brogi E. Future Practices of Breast Pathology Using Digital and Computational Pathology. Adv Anat Pathol 2023; 30:421-433. [PMID: 37737690 DOI: 10.1097/pap.0000000000000414] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/23/2023]
Abstract
Pathology clinical practice has evolved by adopting technological advancements initially regarded as potentially disruptive, such as electron microscopy, immunohistochemistry, and genomic sequencing. Breast pathology has a critical role as a medical domain, where the patient's pathology diagnosis has significant implications for prognostication and treatment of diseases. The advent of digital and computational pathology has brought about significant advancements in the field, offering new possibilities for enhancing diagnostic accuracy and improving patient care. Digital slide scanning enables the conversion of glass slides into high-fidelity digital images, supporting the review of cases in a digital workflow. Digitization enables rendering specimen diagnoses, digitally archiving patient specimens, collaboration, and telepathology. Integration of image analysis and machine learning-based systems layered atop the high-resolution digital images offers novel workflows to assist breast pathologists in their clinical, educational, and research endeavors. Decision support tools may improve the detection and classification of breast lesions and the quantification of immunohistochemical studies. Computational biomarkers may contribute to patient management or outcomes. Furthermore, using digital and computational pathology may increase standardization and quality assurance, especially in areas with high interobserver variability. This review explores the current landscape and possible future applications of digital and computational techniques in the field of breast pathology.
Affiliation(s)
- Matthew G Hanna
- Department of Pathology and Laboratory Medicine, Memorial Sloan Kettering Cancer Center, New York, NY

27
Al-Thelaya K, Gilal NU, Alzubaidi M, Majeed F, Agus M, Schneider J, Househ M. Applications of discriminative and deep learning feature extraction methods for whole slide image analysis: A survey. J Pathol Inform 2023; 14:100335. [PMID: 37928897 PMCID: PMC10622844 DOI: 10.1016/j.jpi.2023.100335] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2023] [Revised: 07/17/2023] [Accepted: 07/19/2023] [Indexed: 11/07/2023] Open
Abstract
Digital pathology technologies, including whole slide imaging (WSI), have significantly improved modern clinical practices by facilitating storing, viewing, processing, and sharing digital scans of tissue glass slides. Researchers have proposed various artificial intelligence (AI) solutions for digital pathology applications, such as automated image analysis, to extract diagnostic information from WSI for improving pathology productivity, accuracy, and reproducibility. Feature extraction methods play a crucial role in transforming raw image data into meaningful representations for analysis, facilitating the characterization of tissue structures, cellular properties, and pathological patterns. These features support several digital pathology tasks, such as cancer prognosis and diagnosis. Deep learning-based feature extraction methods have emerged as a promising approach to accurately represent WSI contents and have demonstrated superior performance in histology-related tasks. In this survey, we provide a comprehensive overview of feature extraction methods, including both manual and deep learning-based techniques, for the analysis of WSIs. We review relevant literature, analyze the discriminative and geometric features of WSIs (i.e., features suited to support the diagnostic process and extracted by "engineered" methods as opposed to AI), and explore predictive modeling techniques using AI and deep learning. This survey examines the advances, challenges, and opportunities in this rapidly evolving field, emphasizing the potential for accurate diagnosis, prognosis, and decision-making in digital pathology.
Affiliation(s)
- Khaled Al-Thelaya
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Nauman Ullah Gilal
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mahmood Alzubaidi
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Fahad Majeed
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Marco Agus
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Jens Schneider
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mowafa Househ
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar

28
Tuerhong A, Silamujiang M, Xianmuxiding Y, Wu L, Mojarad M. An ensemble classifier method based on teaching-learning-based optimization for breast cancer diagnosis. J Cancer Res Clin Oncol 2023; 149:9337-9348. [PMID: 37202580 DOI: 10.1007/s00432-023-04861-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Accepted: 05/13/2023] [Indexed: 05/20/2023]
Abstract
INTRODUCTION Epidemiological studies show that breast cancer is the most common cancer among women worldwide. Breast cancer treatment can be very effective, especially when the disease is detected in the early stages, a goal that can be pursued by applying machine learning models to large-scale breast cancer data. METHODS This paper proposes a new intelligent approach using an optimized ensemble classifier for breast cancer diagnosis. Classification is performed by a new intelligent Group Method of Data Handling (GMDH) neural-network-based ensemble classifier. The method improves performance by using a Teaching-Learning-Based Optimization (TLBO) algorithm to optimize the hyperparameters of the classifier; TLBO is also used as an evolutionary method to address appropriate feature selection in breast cancer data. RESULTS Simulation results show that the proposed method improves accuracy by between 7% and 26% compared to the best results of existing equivalent algorithms. CONCLUSION Based on the obtained results, we suggest the proposed algorithm as an intelligent medical assistant system for breast cancer diagnosis.
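The Teaching-Learning-Based Optimization loop used here for hyperparameter tuning and feature selection can be sketched in a few lines. This is a generic TLBO applied to a toy sphere function, not the paper's GMDH setup; the population size, iteration count, and bounds are illustrative assumptions.

```python
import numpy as np

def tlbo(fitness, dim, pop=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    """Teaching-Learning-Based Optimization: a teacher phase pulls the
    population toward the best solution, then a learner phase lets random
    pairs of solutions learn from each other (greedy acceptance in both)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    F = np.array([fitness(x) for x in X])
    for _ in range(iters):
        teacher = X[F.argmin()]
        mean = X.mean(axis=0)
        for i in range(pop):
            # teacher phase: move toward the teacher, away from the class mean
            tf = rng.integers(1, 3)                      # teaching factor in {1, 2}
            cand = np.clip(X[i] + rng.random(dim) * (teacher - tf * mean), lo, hi)
            fc = fitness(cand)
            if fc < F[i]:
                X[i], F[i] = cand, fc
            # learner phase: step toward a better (or away from a worse) classmate
            j = rng.integers(pop)
            step = (X[i] - X[j]) if F[i] < F[j] else (X[j] - X[i])
            cand = np.clip(X[i] + rng.random(dim) * step, lo, hi)
            fc = fitness(cand)
            if fc < F[i]:
                X[i], F[i] = cand, fc
    return X[F.argmin()], float(F.min())

best, best_f = tlbo(lambda x: float(np.sum(x ** 2)), dim=5)
```

In the paper's setting, `fitness` would instead evaluate the GMDH ensemble's validation error for a given hyperparameter/feature-mask vector.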
Affiliation(s)
- Adila Tuerhong
- Department of Cardio-Oncology, Affiliated Tumor Hospital of Xinjiang Medical University, Urumqi, 830011, Xinjiang, China
- Mutalipu Silamujiang
- Department of Traumatic Orthopedic, The Sixth Affiliated Hospital of Xinjiang Medical University, Urumqi, 830002, Xinjiang, China
- Yilixiati Xianmuxiding
- Department of Emergency, Affiliated Tumor Hospital of Xinjiang Medical University, Urumqi, 830011, Xinjiang, China
- Li Wu
- Department of Cardio-Oncology, Affiliated Tumor Hospital of Xinjiang Medical University, Urumqi, 830011, Xinjiang, China
- Musa Mojarad
- Department of Computer Engineering, Firoozabad Branch, Islamic Azad University, Firoozabad, Iran

29
Rai HM. Cancer detection and segmentation using machine learning and deep learning techniques: a review. MULTIMEDIA TOOLS AND APPLICATIONS 2023; 83:27001-27035. [DOI: 10.1007/s11042-023-16520-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/16/2022] [Revised: 05/12/2023] [Accepted: 08/13/2023] [Indexed: 09/16/2023]
30
TCNN: A Transformer Convolutional Neural Network for artifact classification in whole slide images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104812] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/14/2023]
31
Ali MD, Saleem A, Elahi H, Khan MA, Khan MI, Yaqoob MM, Farooq Khattak U, Al-Rasheed A. Breast Cancer Classification through Meta-Learning Ensemble Technique Using Convolution Neural Networks. Diagnostics (Basel) 2023; 13:2242. [PMID: 37443636 PMCID: PMC10341268 DOI: 10.3390/diagnostics13132242] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2023] [Revised: 06/22/2023] [Accepted: 06/23/2023] [Indexed: 07/15/2023] Open
Abstract
This study aims to develop an efficient and accurate breast cancer classification model using meta-learning approaches and multiple convolutional neural networks. The Breast Ultrasound Images (BUSI) dataset contains various types of breast lesions. The goal is to classify these lesions as benign or malignant, which is crucial for the early detection and treatment of breast cancer. Traditional machine learning and deep learning approaches often fail to classify these images accurately due to their complex and diverse nature. To address this, the proposed model uses several advanced techniques: a meta-learning ensemble technique, transfer learning, and data augmentation. Meta-learning optimizes the model's learning process, allowing it to adapt quickly to new and unseen datasets. Transfer learning leverages pre-trained models such as Inception, ResNet50, and DenseNet121 to enhance the model's feature extraction ability. Data augmentation artificially generates new training images, increasing the size and diversity of the dataset, and meta ensemble learning combines the outputs of multiple CNNs to improve classification accuracy. The work proceeds by first pre-processing the BUSI dataset, then training and evaluating multiple CNNs using different architectures and pre-trained models, applying a meta-learning algorithm to optimize the learning process, and using ensemble learning to combine the outputs of the multiple CNNs. Evaluation results indicate that the model is highly effective, with high accuracy. Finally, the proposed model's performance is compared with state-of-the-art approaches in terms of accuracy, precision, recall, and F1 score.
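The final ensembling step, combining the outputs of several fine-tuned CNNs, can be sketched as weighted soft voting over per-model class probabilities. The paper's meta-learner is more elaborate than a plain average; the toy probability arrays and equal weights below are placeholders, not the authors' configuration.

```python
import numpy as np

def soft_vote(prob_maps, weights=None):
    """Average per-model class-probability arrays (n_samples x n_classes)
    and return the ensemble's predicted class per sample."""
    stack = np.stack(prob_maps)                          # (n_models, n, c)
    w = np.ones(len(prob_maps)) if weights is None else np.asarray(weights, float)
    avg = np.tensordot(w / w.sum(), stack, axes=1)       # weighted mean over models
    return avg.argmax(axis=1), avg

# toy outputs from three base CNNs (e.g. Inception, ResNet50, DenseNet121)
p1 = np.array([[0.9, 0.1], [0.4, 0.6]])
p2 = np.array([[0.7, 0.3], [0.2, 0.8]])
p3 = np.array([[0.6, 0.4], [0.5, 0.5]])
labels, avg = soft_vote([p1, p2, p3])
# sample 0 -> class 0 (benign), sample 1 -> class 1 (malignant)
```

A trainable meta-learner would replace the fixed `weights` with parameters fitted on validation predictions.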
Affiliation(s)
- Muhammad Danish Ali
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Adnan Saleem
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Hubaib Elahi
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Muhammad Amir Khan
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA, Shah Alam 40450, Malaysia
- Muhammad Ijaz Khan
- Institute of Computing and Information Technology, Gomal University, Dera Ismail Khan 29220, Pakistan
- Muhammad Mateen Yaqoob
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Umar Farooq Khattak
- School of Information Technology, UNITAR International University, Kelana Jaya, Petaling Jaya 47301, Malaysia
- Amal Al-Rasheed
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11671, Saudi Arabia

32
Kode H, Barkana BD. Deep Learning- and Expert Knowledge-Based Feature Extraction and Performance Evaluation in Breast Histopathology Images. Cancers (Basel) 2023; 15:3075. [PMID: 37370687 DOI: 10.3390/cancers15123075] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2023] [Revised: 05/31/2023] [Accepted: 06/03/2023] [Indexed: 06/29/2023] Open
Abstract
Cancer develops when a single cell or a group of cells grows and spreads uncontrollably. Histopathology images are used in cancer diagnosis since they show tissue and cell structures under a microscope. Knowledge-based and deep learning-based computer-aided detection is an ongoing research field in cancer diagnosis using histopathology images. Feature extraction is vital in both approaches since the feature set is fed to a classifier and determines the performance. This paper evaluates three feature extraction methods and their performance in breast cancer diagnosis. Features are extracted by (1) a Convolutional Neural Network, (2) a transfer learning architecture, VGG16, and (3) a knowledge-based system. The feature sets are tested with seven classifiers, including Neural Network (64 units), Random Forest, Multilayer Perceptron, Decision Tree, Support Vector Machines, K-Nearest Neighbors, and Narrow Neural Network (10 units), on the BreakHis 400× image dataset. The CNN features achieved up to 85% with the Neural Network and Random Forest, the VGG16 features achieved up to 86% with the Neural Network, and the knowledge-based features achieved up to 98% with the Neural Network, Random Forest, and Multilayer Perceptron classifiers.
Affiliation(s)
- Hepseeba Kode
- Computer Science and Engineering Department, University of Bridgeport, Bridgeport, CT 06604, USA
- Buket D Barkana
- Electrical Engineering Department, University of Bridgeport, Bridgeport, CT 06604, USA

33
Yong MP, Hum YC, Lai KW, Lee YL, Goh CH, Yap WS, Tee YK. Histopathological Gastric Cancer Detection on GasHisSDB Dataset Using Deep Ensemble Learning. Diagnostics (Basel) 2023; 13:1793. [PMID: 37238277 PMCID: PMC10217020 DOI: 10.3390/diagnostics13101793] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2023] [Revised: 05/08/2023] [Accepted: 05/14/2023] [Indexed: 05/28/2023] Open
Abstract
Gastric cancer is a leading cause of cancer-related deaths worldwide, underscoring the need for early detection to improve patient survival rates. The current clinical gold standard for detection is histopathological image analysis, but this process is manual, laborious, and time-consuming. As a result, there has been growing interest in developing computer-aided diagnosis to assist pathologists. Deep learning has shown promise in this regard, but each model can only extract a limited number of image features for classification. To overcome this limitation and improve classification performance, this study proposes ensemble models that combine the decisions of several deep learning models. To evaluate the effectiveness of the proposed models, we tested their performance on the publicly available gastric cancer dataset, the Gastric Histopathology Sub-size Image Database. Our experimental results showed that the top-5 ensemble model achieved state-of-the-art detection accuracy in all sub-databases, with the highest detection accuracy of 99.20% in the 160 × 160 pixels sub-database. These results demonstrated that ensemble models could extract important features from smaller patch sizes and achieve promising performance. Overall, our proposed work could assist pathologists in detecting gastric cancer through histopathological image analysis and contribute to early gastric cancer detection to improve patient survival rates.
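In its simplest hard-voting form, a decision ensemble like the top-5 model evaluated here reduces to a per-sample majority over the base models' predicted labels. The five toy prediction lists below are illustrative, not results from the paper.

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: list of per-model label lists, all the same length.
    Returns the most common label per sample (ties -> first encountered)."""
    return [Counter(sample).most_common(1)[0][0] for sample in zip(*predictions)]

# toy labels from five base models on four image patches (0=normal, 1=cancer)
model_preds = [
    [1, 0, 1, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 1],
]
votes = majority_vote(model_preds)  # -> [1, 0, 1, 1]
```

An odd number of base models, as here, avoids exact ties in binary classification.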
Affiliation(s)
- Ming Ping Yong
- Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang 43000, Malaysia
- Yan Chai Hum
- Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang 43000, Malaysia
- Khin Wee Lai
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur 50603, Malaysia
- Ying Loong Lee
- Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang 43000, Malaysia
- Choon-Hian Goh
- Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang 43000, Malaysia
- Wun-She Yap
- Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang 43000, Malaysia
- Yee Kai Tee
- Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang 43000, Malaysia

34
Mokoatle M, Marivate V, Mapiye D, Bornman R, Hayes VM. A review and comparative study of cancer detection using machine learning: SBERT and SimCSE application. BMC Bioinformatics 2023; 24:112. [PMID: 36959534 PMCID: PMC10037872 DOI: 10.1186/s12859-023-05235-x] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2022] [Accepted: 03/17/2023] [Indexed: 03/25/2023] Open
Abstract
BACKGROUND Using visual, biological, and electronic health records data as the sole input source, pretrained convolutional neural networks and conventional machine learning methods have been heavily employed for the identification of various malignancies. Initially, a series of preprocessing and image segmentation steps is performed to extract region-of-interest features from noisy data. The extracted features are then applied to several machine learning and deep learning methods for the detection of cancer. METHODS In this work, a review of the methods that have been applied to develop machine learning algorithms that detect cancer is provided. With more than 100 types of cancer, this study only examines research on the four most common and prevalent cancers worldwide: lung, breast, prostate, and colorectal cancer. Next, by using state-of-the-art sentence transformers, namely SBERT (2019) and the unsupervised SimCSE (2021), this study proposes a new methodology for detecting cancer. This method requires raw DNA sequences of matched tumor/normal pairs as the only input. The learnt DNA representations retrieved from SBERT and SimCSE are then sent to machine learning algorithms (XGBoost, Random Forest, LightGBM, and CNNs) for classification. As far as we are aware, SBERT and SimCSE transformers have not been applied to represent DNA sequences in cancer detection settings. RESULTS The XGBoost model, which had the highest overall accuracy of 73 ± 0.13% using SBERT embeddings and 75 ± 0.12% using SimCSE embeddings, was the best performing classifier. In light of these findings, it can be concluded that incorporating sentence representations from SimCSE's sentence transformer only marginally improved the performance of machine learning models.
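Sentence transformers expect token sequences, so a raw DNA sequence is typically rewritten as a "sentence" of overlapping k-mers before SBERT/SimCSE encoding. The abstract does not spell out its tokenization, so the k-mer scheme below is an assumption for illustration, and the actual embedding call is only indicated in comments.

```python
def dna_to_kmer_sentence(seq, k=6):
    """Turn a raw DNA sequence into a whitespace-separated 'sentence'
    of overlapping k-mers that a sentence transformer can embed."""
    seq = seq.upper().strip()
    if len(seq) < k:
        return seq
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

sentence = dna_to_kmer_sentence("ACGTACGTAC", k=6)
# -> "ACGTAC CGTACG GTACGT TACGTA ACGTAC"

# The sentences would then be embedded (assumed usage, library not shown here):
#   from sentence_transformers import SentenceTransformer
#   emb = SentenceTransformer("all-MiniLM-L6-v2").encode([sentence])
# and the embeddings fed to XGBoost / Random Forest / LightGBM, as in the paper.
```

The choice of `k` trades vocabulary size against context per token; 6 is a hypothetical value, not taken from the study.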
Affiliation(s)
- Mpho Mokoatle
- Department of Computer Science, University of Pretoria, Pretoria, South Africa
- Vukosi Marivate
- Department of Computer Science, University of Pretoria, Pretoria, South Africa
- Riana Bornman
- School of Health Systems and Public Health, University of Pretoria, Pretoria, South Africa
- Vanessa M Hayes
- School of Medical Sciences, The University of Sydney, Sydney, Australia
- School of Health Systems and Public Health, University of Pretoria, Pretoria, South Africa

35
Garg S, Singh P. Transfer Learning Based Lightweight Ensemble Model for Imbalanced Breast Cancer Classification. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2023; 20:1529-1539. [PMID: 35536810 DOI: 10.1109/tcbb.2022.3174091] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Automated classification of breast cancer can often save lives, as manual detection is usually time-consuming and expensive. Over the last decade, deep learning techniques have been the most widely used for automatic classification of breast cancer from histopathology images. This paper performs binary and multi-class classification of breast cancer using a transfer learning-based ensemble model. To analyze the correctness and reliability of the proposed model, we used an imbalanced IDC dataset and an imbalanced BreakHis dataset in the binary-class scenario, and a balanced BACH dataset for the multi-class classification. A lightweight shallow CNN model with batch normalization to accelerate convergence is aggregated with a lightweight MobileNetV2 to improve learning and adaptability. The aggregated output is fed into a multilayer perceptron to complete the final classification task. An experimental study on all three datasets was performed and compared with recent works. We fine-tuned three different pre-trained models (ResNet50, InceptionV4, and MobileNetV2) and compared them with the proposed lightweight ensemble model in terms of execution time, number of parameters, model size, etc. In both evaluation phases, our model outperforms the others on all three datasets.
36
Yusoff M, Haryanto T, Suhartanto H, Mustafa WA, Zain JM, Kusmardi K. Accuracy Analysis of Deep Learning Methods in Breast Cancer Classification: A Structured Review. Diagnostics (Basel) 2023; 13:diagnostics13040683. [PMID: 36832171 PMCID: PMC9955565 DOI: 10.3390/diagnostics13040683] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2023] [Revised: 02/06/2023] [Accepted: 02/07/2023] [Indexed: 02/17/2023] Open
Abstract
Breast cancer is diagnosed using histopathological imaging. This task is extremely time-consuming due to high image complexity and volume. However, it is important to facilitate the early detection of breast cancer for medical intervention. Deep learning (DL) has become popular in medical imaging solutions and has demonstrated various levels of performance in diagnosing cancerous images. Nonetheless, achieving high precision while minimizing overfitting remains a significant challenge for classification solutions. The handling of imbalanced data and incorrect labeling is a further concern. Additional methods, such as pre-processing, ensemble, and normalization techniques, have been established to enhance image characteristics. These methods could influence classification solutions and be used to overcome overfitting and data balancing issues. Hence, developing a more sophisticated DL variant could improve classification accuracy while reducing overfitting. Technological advancements in DL have fueled the growth of automated breast cancer diagnosis in recent years. This paper reviewed studies on the capability of DL to classify histopathological breast cancer images; the objective was to systematically review and analyze current research on the classification of histopathological images. Literature from the Scopus and Web of Science (WOS) indexes was reviewed, and recent approaches for histopathological breast cancer image classification in DL applications were assessed for papers published up until November 2022. The findings of this study suggest that DL methods, especially convolutional neural networks and their hybrids, are the most cutting-edge approaches currently in use. To find a new technique, it is necessary first to survey the landscape of existing DL approaches and their hybrid methods to conduct comparisons and case studies.
Affiliation(s)
- Marina Yusoff
- Institute for Big Data Analytics and Artificial Intelligence (IBDAAI), Kompleks Al-Khawarizmi, Universiti Teknologi MARA (UiTM), Shah Alam 40450, Selangor, Malaysia
- College of Computing, Informatics and Media, Kompleks Al-Khawarizmi, Universiti Teknologi MARA (UiTM), Shah Alam 40450, Selangor, Malaysia
- Correspondence: (M.Y.); (W.A.M.)
- Toto Haryanto
- Department of Computer Science, IPB University, Bogor 16680, Indonesia
- Heru Suhartanto
- Faculty of Computer Science, Universitas Indonesia, Depok 16424, Indonesia
- Wan Azani Mustafa
- Faculty of Electrical Engineering Technology, Universiti Malaysia Perlis, UniCITI Alam Campus, Sungai Chuchuh, Padang Besar 02100, Perlis, Malaysia
- Correspondence: (M.Y.); (W.A.M.)
- Jasni Mohamad Zain
- Institute for Big Data Analytics and Artificial Intelligence (IBDAAI), Kompleks Al-Khawarizmi, Universiti Teknologi MARA (UiTM), Shah Alam 40450, Selangor, Malaysia
- College of Computing, Informatics and Media, Kompleks Al-Khawarizmi, Universiti Teknologi MARA (UiTM), Shah Alam 40450, Selangor, Malaysia
- Kusmardi Kusmardi
- Department of Anatomical Pathology, Faculty of Medicine, Universitas Indonesia/Cipto Mangunkusumo Hospital, Jakarta 10430, Indonesia
- Human Cancer Research Cluster, Indonesia Medical Education and Research Institute, Universitas Indonesia, Jakarta 10430, Indonesia
37
Fasihi Shirehjini O, Babapour Mofrad F, Shahmohammadi M, Karami F. Grading of gliomas using transfer learning on MRI images. MAGMA (New York, N.Y.) 2023; 36:43-53. [PMID: 36326937 DOI: 10.1007/s10334-022-01046-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Revised: 09/04/2022] [Accepted: 10/17/2022] [Indexed: 11/06/2022]
Abstract
OBJECTIVE Despite the critical role of Magnetic Resonance Imaging (MRI) in the diagnosis of brain tumours, many pitfalls remain in their exact grading, particularly for gliomas. In this regard, this study aimed to examine the potential of Transfer Learning (TL) and Machine Learning (ML) algorithms in the accurate grading of gliomas on MRI images. MATERIALS AND METHODS The dataset included four types of axial MRI images of glioma brain tumours with grades I-IV: T1-weighted, T2-weighted, FLAIR, and T1-weighted Contrast-Enhanced (T1-CE). Images were resized, normalized, and randomly split into training, validation, and test sets. ImageNet pre-trained Convolutional Neural Networks (CNNs) were utilized for feature extraction and classification, using Adam and SGD optimizers. Logistic Regression (LR) and Support Vector Machine (SVM) methods were also implemented for classification in place of Fully Connected (FC) layers, taking advantage of the features extracted by each CNN. RESULTS Evaluation metrics were computed to find the model with the best performance, and the highest overall accuracy of 99.38% was achieved for the model combining an SVM classifier with features extracted by pre-trained VGG-16. DISCUSSION It was demonstrated that developing Computer-aided Diagnosis (CAD) systems using pre-trained CNNs and classification algorithms is a practical approach to automatically specifying the grade of glioma brain tumours on MRI images. Using these models is an excellent alternative to invasive methods and helps doctors diagnose more accurately before treatment.
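The hybrid pipeline described in METHODS (a frozen, pre-trained CNN as feature extractor, with LR or SVM replacing the FC layers) can be sketched as follows. This is a minimal numpy sketch under stated assumptions: the "backbone" is a hypothetical random projection standing in for VGG-16, the data are synthetic two-cluster toys standing in for two glioma grades, and gradient-descent logistic regression stands in for the paper's LR/SVM heads.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(images, weights):
    """Stand-in for a frozen, pre-trained CNN backbone: a fixed linear
    projection followed by ReLU. In the paper, the features came from
    ImageNet-pretrained networks such as VGG-16."""
    return np.maximum(images @ weights, 0.0)

def train_logistic_regression(X, y, lr=0.01, epochs=500):
    """Plain gradient-descent logistic regression replacing the FC head
    (the paper attached LR and SVM classifiers to the extracted features)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Synthetic two-cluster "images" standing in for two glioma grades.
images = np.vstack([rng.normal(-2, 0.5, (50, 8)), rng.normal(2, 0.5, (50, 8))])
labels = np.repeat([0, 1], 50)

backbone = rng.normal(size=(8, 16))  # frozen "pretrained" weights
feats = extract_features(images, backbone)
w, b = train_logistic_regression(feats, labels)
acc = np.mean((feats @ w + b > 0).astype(int) == labels)
```

The point of the design is that only the lightweight classifier is trained, so a small medical dataset suffices even though the backbone was fitted on millions of natural images.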
Affiliation(s)
- Oktay Fasihi Shirehjini
- Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Farshid Babapour Mofrad
- Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Mohammadreza Shahmohammadi
- Functional Neurosurgery Research Center, Shohada Tajrish Comprehensive Neurosurgical Center of Excellence, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Fatemeh Karami
- Department of Medical Genetics, Applied Biophotonics Research Center, Science and Research Branch, Islamic Azad University, Tehran, Iran
38
Afriyie Y, Weyori BA, Opoku AA. A scaling up approach: a research agenda for medical imaging analysis with applications in deep learning. J Exp Theor Artif Intell 2023. [DOI: 10.1080/0952813x.2023.2165721] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
Affiliation(s)
- Yaw Afriyie
- Department of Computer Science and Informatics, University of Energy and Natural Resources, School of Sciences, Sunyani, Ghana
- Department of Computer Science, Faculty of Information and Communication Technology, SD Dombo University of Business and Integrated Development Studies, Wa, Ghana
- Benjamin A. Weyori
- Department of Computer Science and Informatics, University of Energy and Natural Resources, School of Sciences, Sunyani, Ghana
- Alex A. Opoku
- Department of Mathematics & Statistics, University of Energy and Natural Resources, School of Sciences, Sunyani, Ghana
39
Breast cancer classification by a new approach to assessing deep neural network-based uncertainty quantification methods. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104057] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
40
Chan RC, To CKC, Cheng KCT, Yoshikazu T, Yan LLA, Tse GM. Artificial intelligence in breast cancer histopathology. Histopathology 2023; 82:198-210. [PMID: 36482271 DOI: 10.1111/his.14820] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2022] [Revised: 09/22/2022] [Accepted: 09/28/2022] [Indexed: 12/13/2022]
Abstract
This is a review of the use of artificial intelligence for digital breast pathology. A systematic search on PubMed was conducted, identifying 17,324 research papers related to breast cancer pathology. Following a semimanual screening, 664 papers were retrieved and reviewed. The papers are grouped into six major tasks performed by pathologists, namely: molecular and hormonal analysis, grading, mitotic figure counting, Ki-67 indexing, tumour-infiltrating lymphocyte assessment, and lymph node metastasis identification. Under each task, open-source datasets available for building artificial intelligence (AI) tools are also listed. Many AI tools showed promise and demonstrated feasibility in the automation of routine pathology investigations. We expect continued growth of AI in this field as new algorithms mature.
Affiliation(s)
- Ronald CK Chan
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Chun Kit Curtis To
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Ka Chuen Tom Cheng
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Tada Yoshikazu
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Lai Ling Amy Yan
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Gary M Tse
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
41
Duenweg SR, Brehler M, Bobholz SA, Lowman AK, Winiarz A, Kyereme F, Nencka A, Iczkowski KA, LaViolette PS. Comparison of a machine and deep learning model for automated tumor annotation on digitized whole slide prostate cancer histology. PLoS One 2023; 18:e0278084. [PMID: 36928230 PMCID: PMC10019669 DOI: 10.1371/journal.pone.0278084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Accepted: 03/04/2023] [Indexed: 03/18/2023] Open
Abstract
One in eight men will be affected by prostate cancer (PCa) in their lives. While the current clinical standard prognostic marker for PCa is the Gleason score, it is subject to inter-reviewer variability. This study compares two machine learning methods for discriminating cancerous regions on digitized histology from 47 PCa patients. Whole-slide images were annotated by a GU fellowship-trained pathologist for each Gleason pattern. High-resolution tiles were extracted from annotated and unlabeled tissue. Patients were separated into a training set of 31 patients (Cohort A, n = 9345 tiles) and a testing cohort of 16 patients (Cohort B, n = 4375 tiles). Tiles from Cohort A were used to train a ResNet model, and glands from these tiles were segmented to calculate pathomic features for training a bagged ensemble model to discriminate (1) cancer from noncancer, (2) high- and low-grade cancer from noncancer, and (3) all Gleason patterns. The outputs of these models were compared to ground-truth pathologist annotations. The ensemble and ResNet models had overall accuracies of 89% and 88%, respectively, at predicting cancer from noncancer. The ResNet model was additionally able to differentiate Gleason patterns on data from Cohort B, while the ensemble model was not. Our results suggest that quantitative pathomic features calculated from PCa histology can distinguish regions of cancer; however, texture features captured by deep learning frameworks better differentiate unique Gleason patterns.
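The cohort design above splits by patient, not by tile, so that tiles from one patient never appear in both training and testing. A minimal sketch of such a leakage-free split; the patient IDs and the helper name are illustrative inventions, not from the study:

```python
def split_by_patient(tiles, train_patients):
    """Assign each (patient_id, tile) record to a cohort so that no
    patient contributes tiles to both cohorts -- mirroring the study's
    patient-level Cohort A / Cohort B design."""
    train, test = [], []
    for patient_id, tile in tiles:
        (train if patient_id in train_patients else test).append(tile)
    return train, test

# Hypothetical toy data: 3 tiles each from 4 patients.
tiles = [(p, f"{p}_tile{i}") for p in ("P1", "P2", "P3", "P4") for i in range(3)]
train_tiles, test_tiles = split_by_patient(tiles, train_patients={"P1", "P2", "P3"})
# All test tiles come from the held-out patient P4.
```

Splitting at the tile level instead would let near-duplicate tiles from the same slide land on both sides of the split and inflate the reported test accuracy.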
Affiliation(s)
- Savannah R Duenweg
- Department of Biophysics, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Michael Brehler
- Department of Radiology, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Samuel A Bobholz
- Department of Biophysics, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Allison K Lowman
- Department of Radiology, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Aleksandra Winiarz
- Department of Biophysics, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Fitzgerald Kyereme
- Department of Radiology, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Andrew Nencka
- Department of Radiology, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Kenneth A Iczkowski
- Department of Pathology, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Peter S LaViolette
- Department of Radiology, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Department of Biomedical Engineering, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
42
Saini M, Susan S. VGGIN-Net: Deep Transfer Network for Imbalanced Breast Cancer Dataset. IEEE/ACM Trans Comput Biol Bioinform 2023; 20:752-762. [PMID: 35349449 DOI: 10.1109/tcbb.2022.3163277] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
In this paper, we present a novel deep neural network architecture involving a transfer learning approach, formed by freezing and concatenating all the layers up to the block4_pool layer of the VGG16 pre-trained model (at the lower level) with the layers of a randomly initialized naïve Inception block module (at the higher level). Further, we add batch normalization, flatten, dropout, and dense layers to the proposed architecture. Our transfer network, called VGGIN-Net, facilitates the transfer of domain knowledge from the larger ImageNet object dataset to the smaller imbalanced breast cancer dataset. To improve the performance of the proposed model, regularization was applied in the form of dropout and data augmentation. A detailed block-wise fine-tuning has been conducted on the proposed deep transfer network for images of different magnification factors. The results of extensive experiments indicate a significant improvement in classification performance after the application of fine-tuning. The proposed deep learning architecture with transfer learning and fine-tuning yields the highest accuracies in comparison to other state-of-the-art approaches for the classification of the BreakHis breast cancer dataset. The architecture is designed so that it can be effectively transfer-learned on other breast cancer datasets.
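The structural idea at the heart of VGGIN-Net (parallel branches of an Inception-style block concatenated along the channel axis, stacked on a frozen VGG16 feature map) can be sketched shape-wise in numpy. The random pointwise projections below are stand-ins for real 1x1/3x3/5x5 convolution branches, and the branch widths (8, 16, 4) are assumptions for illustration only:

```python
import numpy as np

def branch(x, out_channels, rng):
    """Stand-in for one convolution branch: a random pointwise (1x1)
    projection with ReLU. Real Inception branches use 1x1, 3x3 and 5x5
    convolutions plus pooling."""
    h, w, c = x.shape
    kernel = rng.normal(size=(c, out_channels))
    return np.maximum(x.reshape(-1, c) @ kernel, 0.0).reshape(h, w, out_channels)

def naive_inception_block(x, rng, widths=(8, 16, 4)):
    """Run parallel branches on the same input and concatenate their
    outputs along the channel axis, as in the randomly initialized
    Inception module VGGIN-Net stacks on the frozen VGG16 layers."""
    return np.concatenate([branch(x, ch, rng) for ch in widths], axis=-1)

rng = np.random.default_rng(1)
feature_map = rng.normal(size=(14, 14, 32))  # a block4_pool-like feature map
out = naive_inception_block(feature_map, rng)  # channels: 8 + 16 + 4 = 28
```

The channel-wise concatenation is why the block's output width is simply the sum of the branch widths, which determines the size of the dense layers that follow.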
43
The Systematic Review of Artificial Intelligence Applications in Breast Cancer Diagnosis. Diagnostics (Basel) 2022; 13:45. [PMID: 36611337 PMCID: PMC9818874 DOI: 10.3390/diagnostics13010045] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2022] [Revised: 12/16/2022] [Accepted: 12/17/2022] [Indexed: 12/28/2022] Open
Abstract
Several studies have demonstrated the value of artificial intelligence (AI) applications in breast cancer diagnosis. However, existing studies comparing breast cancer diagnosis and AI lack systematization, and each appears to be conducted uniquely. The purpose and contributions of this study are to offer elaborative knowledge on the applications of AI in the diagnosis of breast cancer through citation analysis, in order to categorize the main areas of specialization that attract the attention of the academic community, and through thematic issue analysis, to identify the themes being researched in each category. In this study, a total of 17,900 studies addressing breast cancer and AI published between 2012 and 2022 were obtained from these databases: IEEE, Embase: Excerpta Medica Database Guide-Ovid, PubMed, Springer, Web of Science, and Google Scholar. After applying inclusion and exclusion criteria to the search, 36 studies were identified. The vast majority of AI applications used classification models for the prediction of breast cancer. Accuracy (up to 99%) was the most frequently reported performance metric, followed by specificity (98%) and area under the curve (0.95). Additionally, the Convolutional Neural Network (CNN) was the model of choice in several studies. This study shows that the quantity and caliber of studies that use AI applications in breast cancer diagnosis will continue to rise annually. As a result, AI-based applications are viewed as a supplement to doctors' clinical reasoning, with the ultimate goal of providing quality healthcare that is both affordable and accessible to everyone worldwide.
44
Zhao Y, Zhang J, Hu D, Qu H, Tian Y, Cui X. Application of Deep Learning in Histopathology Images of Breast Cancer: A Review. Micromachines (Basel) 2022; 13:2197. [PMID: 36557496 PMCID: PMC9781697 DOI: 10.3390/mi13122197] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 12/04/2022] [Accepted: 12/09/2022] [Indexed: 06/17/2023]
Abstract
With the development of artificial intelligence technology and computer hardware, deep learning algorithms have become a powerful auxiliary tool for medical image analysis. This study was an attempt to use statistical methods to analyze studies related to the detection, segmentation, and classification of breast cancer in pathological images. After an analysis of 107 articles on the application of deep learning to pathological images of breast cancer, this study is divided into three directions based on the types of results they report: detection, segmentation, and classification. We introduced and analyzed models that performed well in these three directions and summarized the related work from recent years. Based on the results obtained, the significant capability of deep learning in the analysis of breast cancer pathological images can be recognized. Furthermore, in the classification and detection of pathological images of breast cancer, the accuracy of deep learning algorithms has surpassed that of pathologists in certain circumstances. Our study provides a comprehensive review of the development of research related to breast cancer pathological imaging and provides reliable recommendations for the structure of deep learning network models in different application scenarios.
Affiliation(s)
- Yue Zhao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
- Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
- Jie Zhang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Dayu Hu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Hui Qu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Ye Tian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Xiaoyu Cui
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
- Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
45
Madani M, Behzadi MM, Nabavi S. The Role of Deep Learning in Advancing Breast Cancer Detection Using Different Imaging Modalities: A Systematic Review. Cancers (Basel) 2022; 14:5334. [PMID: 36358753 PMCID: PMC9655692 DOI: 10.3390/cancers14215334] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2022] [Revised: 10/23/2022] [Accepted: 10/25/2022] [Indexed: 12/02/2022] Open
Abstract
Breast cancer is among the most common and fatal diseases for women, and no permanent treatment has been discovered. Thus, early detection is a crucial step to control and cure breast cancer that can save the lives of millions of women. For example, in 2020, more than 65% of breast cancer patients were diagnosed in an early stage of cancer, from which all survived. Although early detection is the most effective approach for cancer treatment, breast cancer screening conducted by radiologists is very expensive and time-consuming. More importantly, conventional methods of analyzing breast cancer images suffer from high false-detection rates. Different breast cancer imaging modalities are used to extract and analyze the key features affecting the diagnosis and treatment of breast cancer. These imaging modalities can be divided into subgroups such as mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. Radiologists or pathologists analyze images produced by these methods manually, which leads to an increase in the risk of wrong decisions for cancer detection. Thus, the utilization of new automatic methods to analyze all kinds of breast screening images to assist radiologists to interpret images is required. Recently, artificial intelligence (AI) has been widely utilized to automatically improve the early detection and treatment of different types of cancer, specifically breast cancer, thereby enhancing the survival chance of patients. Advances in AI algorithms, such as deep learning, and the availability of datasets obtained from various imaging modalities have opened an opportunity to surpass the limitations of current breast cancer analysis methods. In this article, we first review breast cancer imaging modalities, and their strengths and limitations. Then, we explore and summarize the most recent studies that employed AI in breast cancer detection using various breast imaging modalities. In addition, we report available datasets on the breast cancer imaging modalities, which are important in developing AI-based algorithms and training deep learning models. In conclusion, this review paper tries to provide a comprehensive resource to help researchers working in breast cancer imaging analysis.
Affiliation(s)
- Mohammad Madani
- Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Mohammad Mahdi Behzadi
- Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Sheida Nabavi
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
46
Brancati N, Anniciello AM, Pati P, Riccio D, Scognamiglio G, Jaume G, De Pietro G, Di Bonito M, Foncubierta A, Botti G, Gabrani M, Feroce F, Frucci M. BRACS: A Dataset for BReAst Carcinoma Subtyping in H&E Histology Images. Database (Oxford) 2022; 2022:6762252. [PMID: 36251776 PMCID: PMC9575967 DOI: 10.1093/database/baac093] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Revised: 09/16/2022] [Accepted: 10/01/2022] [Indexed: 11/11/2022]
Abstract
Breast cancer is the most commonly diagnosed cancer and accounts for the highest number of cancer deaths among women. Advances in diagnostic activities combined with large-scale screening policies have significantly lowered the mortality rates for breast cancer patients. However, the manual inspection of tissue slides by pathologists is cumbersome, time-consuming and is subject to significant inter- and intra-observer variability. Recently, the advent of whole-slide scanning systems has empowered the rapid digitization of pathology slides and enabled the development of Artificial Intelligence (AI)-assisted digital workflows. However, AI techniques, especially Deep Learning, require a large amount of high-quality annotated data to learn from. Constructing such task-specific datasets poses several challenges, such as data-acquisition level constraints, time-consuming and expensive annotations and anonymization of patient information. In this paper, we introduce the BReAst Carcinoma Subtyping (BRACS) dataset, a large cohort of annotated Hematoxylin and Eosin (H&E)-stained images to advance AI development in the automatic characterization of breast lesions. BRACS contains 547 Whole-Slide Images (WSIs) and 4539 Regions Of Interest (ROIs) extracted from the WSIs. Each WSI and respective ROIs are annotated by the consensus of three board-certified pathologists into different lesion categories. Specifically, BRACS includes three lesion types, i.e., benign, malignant and atypical, which are further subtyped into seven categories. It is, to the best of our knowledge, the largest annotated dataset for breast cancer subtyping both at WSI and ROI levels. Furthermore, by including the understudied atypical lesions, BRACS offers a unique opportunity for leveraging AI to better understand their characteristics. We encourage AI practitioners to develop and evaluate novel algorithms on the BRACS dataset to further breast cancer diagnosis and patient care.
Database URL: https://www.bracs.icar.cnr.it/
47
Mukhlif AA, Al-Khateeb B, Mohammed MA. An extensive review of state-of-the-art transfer learning techniques used in medical imaging: Open issues and challenges. J Intell Syst 2022. [DOI: 10.1515/jisys-2022-0198] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Deep learning techniques, which use a massive technology known as convolutional neural networks, have shown excellent results in a variety of areas, including image processing and interpretation. However, as the depth of these networks grows, so does the demand for the large amount of labeled data required to train them. In particular, the medical field suffers from a lack of images because the procedure for obtaining labeled medical images in the healthcare field is difficult, expensive, and requires specialized expertise to add labels to images. Moreover, the process may be prone to errors and time-consuming. Current research has revealed transfer learning as a viable solution to this problem. Transfer learning allows us to transfer knowledge gained from a previous task to improve and tackle a new problem. This study aims to conduct a comprehensive survey of recent studies that dealt with solving this problem and the most important metrics used to evaluate these methods. In addition, this study identifies problems in transfer learning techniques and highlights the problems of the medical dataset and potential problems that can be addressed in future research. According to our review, many researchers use models pre-trained on the ImageNet dataset (VGG16, ResNet, Inception v3) in many applications such as skin cancer, breast cancer, and diabetic retinopathy classification tasks. These techniques require further investigation, as the models were trained on natural, non-medical images. In addition, many researchers use data augmentation techniques to expand their datasets and avoid overfitting. However, not enough studies have shown the effect on performance with or without data augmentation. Accuracy, recall, precision, F1 score, receiver operating characteristic curve, and area under the curve (AUC) were the most widely used measures in these studies. Furthermore, we identified problems in the datasets for melanoma and breast cancer and suggested corresponding solutions.
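As a minimal illustration of the evaluation measures the review found most widely used, the scalar metrics can be derived from binary confusion-matrix counts; the counts in the example are hypothetical:

```python
def classification_metrics(tp, fp, fn, tn):
    """Scalar metrics most often reported in the surveyed studies,
    computed from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # a.k.a. sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical confusion counts: 90 TP, 10 FP, 5 FN, 95 TN.
m = classification_metrics(tp=90, fp=10, fn=5, tn=95)
```

Reporting precision, recall, and F1 alongside accuracy matters precisely for the imbalanced datasets the survey discusses, where accuracy alone can look high while the minority class is badly misclassified.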
Affiliation(s)
- Abdulrahman Abbas Mukhlif
- Computer Science Department, College of Computer Science and Information Technology, University of Anbar, 31001 Ramadi, Anbar, Iraq
- Belal Al-Khateeb
- Computer Science Department, College of Computer Science and Information Technology, University of Anbar, 31001 Ramadi, Anbar, Iraq
- Mazin Abed Mohammed
- Computer Science Department, College of Computer Science and Information Technology, University of Anbar, 31001 Ramadi, Anbar, Iraq
48
Deep and dense convolutional neural network for multi category classification of magnification specific and magnification independent breast cancer histopathological images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103935] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
49
Aljuaid H, Alturki N, Alsubaie N, Cavallaro L, Liotta A. Computer-aided diagnosis for breast cancer classification using deep neural networks and transfer learning. Comput Methods Programs Biomed 2022; 223:106951. [PMID: 35767911 DOI: 10.1016/j.cmpb.2022.106951] [Citation(s) in RCA: 54] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Revised: 05/25/2022] [Accepted: 06/09/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Many developed and developing countries worldwide suffer from cancer-related fatal diseases. In particular, the rate of breast cancer in females increases daily, partly due to a lack of awareness and missed diagnosis at the early stages. Proper first-line breast cancer treatment can only be provided by adequately detecting and classifying cancer during the very early stages of its development. The use of medical image analysis techniques and computer-aided diagnosis may help accelerate and automate both cancer detection and classification, while also training and aiding less experienced physicians. For large datasets of medical images, convolutional neural networks play a significant role in detecting and classifying cancer effectively. METHODS This article presents a novel computer-aided diagnosis method for breast cancer classification (both binary and multi-class), using a combination of deep neural networks (ResNet-18, ShuffleNet, and Inception-V3Net) and transfer learning on the publicly available BreakHis dataset. RESULTS AND CONCLUSIONS Our proposed method provides the best average accuracies for binary classification of benign or malignant cancer cases of 99.7%, 97.66%, and 96.94% for ResNet, Inception-V3Net, and ShuffleNet, respectively. Average accuracies for multi-class classification were 97.81%, 96.07%, and 95.79% for ResNet, Inception-V3Net, and ShuffleNet, respectively.
Affiliation(s)
- Hanan Aljuaid
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), PO Box 84428, Riyadh 11671, Saudi Arabia
- Nazik Alturki
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), PO Box 84428, Riyadh 11671, Saudi Arabia
- Najah Alsubaie
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), PO Box 84428, Riyadh 11671, Saudi Arabia
- Lucia Cavallaro
- Faculty of Computer Science, Free University of Bozen-Bolzano, Piazza Domenicani, 3, Bolzano 39100, Italy
- Antonio Liotta
- Faculty of Computer Science, Free University of Bozen-Bolzano, Piazza Domenicani, 3, Bolzano 39100, Italy
50
Guan X, Lu N, Zhang J. Evaluation of Epidermal Growth Factor Receptor 2 Status in Gastric Cancer by CT-Based Deep Learning Radiomics Nomogram. Front Oncol 2022; 12:905203. [PMID: 35898877 PMCID: PMC9309372 DOI: 10.3389/fonc.2022.905203] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2022] [Accepted: 06/21/2022] [Indexed: 11/24/2022] Open
Abstract
Purpose To explore the role of computed tomography (CT)-based deep learning and radiomics in the preoperative evaluation of human epidermal growth factor receptor 2 (HER2) status in gastric cancer. Materials and methods The clinical data of gastric cancer patients were evaluated retrospectively, and 357 patients were chosen for this study (training cohort: 249; test cohort: 108). The preprocessed enhanced CT arterial-phase images were selected for lesion segmentation and for radiomics and deep learning feature extraction. We integrated deep learning features and radiomic features (Inte). Four methods were used for feature selection. We constructed models with either a support vector machine (SVM) or a random forest (RF). The area under the receiver operating characteristic curve (AUC) was used to assess the performance of these models. We also constructed a nomogram including Inte-feature scores and clinical factors. Results The radiomics-SVM model showed good classification performance (AUC, training cohort: 0.8069; test cohort: 0.7869). The AUCs of the ResNet50-SVM model and the Inte-SVM model in the test cohort were 0.8955 and 0.9055, respectively. The nomogram showed excellent discrimination, achieving a greater AUC (training cohort: 0.9207; test cohort: 0.9224). Conclusion A CT-based deep learning radiomics nomogram can accurately and effectively assess HER2 status in patients with gastric cancer before surgery, and it is expected to assist physicians in clinical decision-making and facilitate individualized treatment planning.
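The AUC values reported above can be computed, for a toy example, via the rank-sum (Mann-Whitney) identity: AUC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. The scores and labels below are hypothetical, not from the study:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the fraction of positive/negative pairs ranked correctly, counting
    ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores for 4 HER2-positive and 4 HER2-negative cases.
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.5, 0.3, 0.2]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
result = auc(scores, labels)  # → 0.875
```

Because AUC depends only on the ranking of scores, it is insensitive to the classification threshold, which is why it is the preferred headline metric for models, like this nomogram, that output continuous risk scores.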
Affiliation(s)
- Xiao Guan
- Department of General Surgery, The Second Affiliated Hospital of Nanjing Medical University, Nanjing Medical University, Nanjing, China
- Na Lu
- Department of General Surgery, The Second Affiliated Hospital of Nanjing Medical University, Nanjing Medical University, Nanjing, China