1. Singh R, Khan A, Seneviratne L, Hussain I. Deep learning approach for detecting tomato flowers and buds in greenhouses on 3P2R gantry robot. Sci Rep 2024; 14:20552. [PMID: 39232065] [PMCID: PMC11374987] [DOI: 10.1038/s41598-024-71013-1]
Abstract
In recent years, significant advancements have been made in the field of smart greenhouses, particularly in the application of computer vision and robotics for pollinating flowers. Robotic pollination offers several benefits, including reduced labor requirements and the preservation of costly pollen through artificial tomato pollination. However, previous studies have primarily focused on the labeling and detection of tomato flowers alone. The objective of this study was therefore to develop a comprehensive methodology for simultaneously labeling, training, and detecting tomato flowers specifically tailored for robotic pollination. To achieve this, transfer learning techniques were employed using well-known models, namely YOLOv5 and the recently introduced YOLOv8, for tomato flower detection. The performance of both models was evaluated on the same image dataset and compared using their Average Precision (AP) scores to determine the superior model. The results indicated that YOLOv8 achieved a higher mean AP (mAP) of 92.6% in tomato flower and bud detection, outperforming YOLOv5 at 91.2%. Notably, YOLOv8 also demonstrated an inference speed of 0.7 ms for 1920 × 1080 pixel images resized to 640 × 640 pixels during detection. The image dataset was acquired during both morning and evening periods to minimize the impact of lighting conditions on the detection model. These findings highlight the potential of YOLOv8 for real-time detection of tomato flowers and buds, enabling further estimation of flower blooming peaks and facilitating robotic pollination. In the context of robotic pollination, the study also focuses on the deployment of the proposed detection model on the 3P2R gantry robot, introducing a kinematic model and a modified circuit for the robot. The position-based visual servoing method is employed to approach the detected flower during the pollination process. The effectiveness of the proposed visual servoing approach is validated in both unclustered and clustered plant environments in a laboratory setting. Additionally, this study provides valuable theoretical and practical insights for specialists in the field of greenhouse systems, particularly in the design of flower detection algorithms using computer vision and their deployment in robotic systems used in greenhouses.
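The position-based visual servoing loop described in this entry can be illustrated with a minimal sketch. This is not the paper's controller: the gantry's 3P2R kinematic model and circuit are omitted, and the function names, proportional gain, and tolerance are illustrative assumptions. The idea is simply to drive the effector toward the flower position estimated by the detector.

```python
def pbvs_step(effector, target, gain=0.5):
    """One position-based visual-servoing update: move each gantry axis
    a fraction (gain) of the remaining Cartesian error."""
    return tuple(e + gain * (t - e) for e, t in zip(effector, target))

def servo_to_flower(effector, target, tol=1e-3, max_steps=100):
    """Iterate proportional steps until the effector reaches the detected
    flower position (both expressed in the robot frame)."""
    for _ in range(max_steps):
        if max(abs(t - e) for e, t in zip(effector, target)) < tol:
            break
        effector = pbvs_step(effector, target)
    return effector
```

In the real system, the target would come from the detector's output transformed into the robot frame, and each step would be executed by the prismatic axes.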
Affiliation(s)
- Rajmeet Singh
- Department of Mechanical Engineering, Khalifa University, Abu Dhabi, United Arab Emirates
- Khalifa University Center for Robotics and Autonomous Systems (KUCARS), Khalifa University, Abu Dhabi, United Arab Emirates
- Asim Khan
- Khalifa University Center for Robotics and Autonomous Systems (KUCARS), Khalifa University, Abu Dhabi, United Arab Emirates
- Lakmal Seneviratne
- Department of Mechanical Engineering, Khalifa University, Abu Dhabi, United Arab Emirates
- Khalifa University Center for Robotics and Autonomous Systems (KUCARS), Khalifa University, Abu Dhabi, United Arab Emirates
- Irfan Hussain
- Department of Mechanical Engineering, Khalifa University, Abu Dhabi, United Arab Emirates
- Khalifa University Center for Robotics and Autonomous Systems (KUCARS), Khalifa University, Abu Dhabi, United Arab Emirates
2. Kalpana P, Anandan R, Hussien AG, Migdady H, Abualigah L. Plant disease recognition using residual convolutional enlightened Swin transformer networks. Sci Rep 2024; 14:8660. [PMID: 38622177] [PMCID: PMC11018742] [DOI: 10.1038/s41598-024-56393-8]
Abstract
Agriculture plays a pivotal role in the economic development of a nation, but agricultural growth is badly affected by many factors, one of which is plant disease. Early-stage prediction of these diseases is crucial for global health and can even be a game changer for farmers' lives. Recently, the adoption of modern technologies such as the Internet of Things (IoT) and deep learning has opened the way to intelligent machines that predict plant diseases before they become deep-rooted in farmland. However, precise prediction of plant diseases is a complex job due to the presence of noise, changes in intensity, the close resemblance between healthy and diseased plants, and the varying dimensions of plant leaves. Tackling this problem requires highly accurate, intelligently tuned deep learning algorithms. In this research article, a novel ensemble of Swin transformers and residual convolutional networks is proposed. Swin transformers (ST) are hierarchical structures with linearly scalable computing complexity that offer performance and flexibility at various scales. To extract the best deep key-point features, the Swin transformers and residual networks are combined, followed by feed-forward networks for better prediction. Extensive experiments are conducted on the PlantVillage Kaggle dataset, and performance metrics including accuracy, precision, recall, specificity, and F1-score are evaluated and analysed. Existing architectures, including FCN-8s, CED-Net, SegNet, DeepLabv3, DenseNets, and CentralNets, are used to demonstrate the superiority of the suggested model. The experimental results show that, in terms of accuracy, precision, recall, and F1-score, the introduced model performs better than other state-of-the-art hybrid learning models.
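The fusion idea in this entry, backbone features combined and passed through a feed-forward head, can be sketched without any deep-learning framework. This is only an illustration of the ensemble's final stage: the feature vectors, weights, and layer sizes below are toy assumptions, not the paper's trained model.

```python
def feed_forward(x, weights, biases):
    """One dense layer with ReLU activation over a feature vector."""
    return [max(0.0, sum(w * v for w, v in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

def fuse_and_classify(swin_feats, resnet_feats, weights, biases):
    """Concatenate Swin and residual-network features, score each
    disease class with the feed-forward head, and return the argmax."""
    scores = feed_forward(swin_feats + resnet_feats, weights, biases)
    return max(range(len(scores)), key=scores.__getitem__)
```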
Affiliation(s)
- Ponugoti Kalpana
- Department of Computer Science Engineering, Vels Institute of Science Technology and Advanced Studies, Chennai, Tamil Nadu, 600117, India.
- R Anandan
- Department of Computer Science Engineering, Vels Institute of Science Technology and Advanced Studies, Chennai, Tamil Nadu, 600117, India
- Abdelazim G Hussien
- Department of Computer and Information Science, Linköping University, Linköping, Sweden
- Faculty of Science, Fayoum University, Fayoum, Egypt
- Hazem Migdady
- CSMIS Department, Oman College of Management and Technology, 320, Barka, Oman
- Laith Abualigah
- Artificial Intelligence and Sensing Technologies (AIST) Research Center, University of Tabuk, 71491, Tabuk, Saudi Arabia
- Computer Science Department, Al Al-Bayt University, Mafraq, 25113, Jordan
- Hourani Center for Applied Scientific Research, Al-Ahliyya Amman University, Amman, 19328, Jordan
- MEU Research Unit, Middle East University, Amman, 11831, Jordan
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos, 13-5053, Lebanon
- School of Computer Sciences, Universiti Sains Malaysia, 11800, George Town, Penang, Malaysia
- School of Engineering and Technology, Sunway University Malaysia, 27500, Petaling Jaya, Malaysia
- Applied Science Research Center, Applied Science Private University, Amman, 11931, Jordan
- College of Engineering, Yuan Ze University, Taoyuan, Taiwan
3. Khan A, Hassan T, Shafay M, Fahmy I, Werghi N, Mudigansalage S, Hussain I. Tomato maturity recognition with convolutional transformers. Sci Rep 2023; 13:22885. [PMID: 38129680] [PMCID: PMC10739758] [DOI: 10.1038/s41598-023-50129-w]
Abstract
Tomatoes are a major crop worldwide, and accurately classifying their maturity is important for many agricultural applications, such as harvesting, grading, and quality control. In this paper, the authors propose a novel method for tomato maturity classification using a convolutional transformer. The convolutional transformer is a hybrid architecture that combines the strengths of convolutional neural networks (CNNs) and transformers. Additionally, this study introduces a new tomato dataset named KUTomaData, explicitly designed to train deep-learning models for tomato segmentation and classification. KUTomaData is a compilation of images sourced from a greenhouse in the UAE, with approximately 700 images available for training and testing. The dataset is prepared under various lighting conditions and viewing perspectives and employs different mobile camera sensors, distinguishing it from existing datasets. The contributions of this paper are threefold: firstly, the authors propose a novel method for tomato maturity classification using a modular convolutional transformer. Secondly, the authors introduce a new tomato image dataset that contains images of tomatoes at different maturity levels. Lastly, the authors show that the convolutional transformer outperforms state-of-the-art methods for tomato maturity classification. The effectiveness of the proposed framework in handling cluttered and occluded tomato instances was evaluated using two additional public datasets, Laboro Tomato and Rob2Pheno Annotated Tomato, as benchmarks. The evaluation results across these three datasets demonstrate the exceptional performance of our proposed framework, surpassing the state-of-the-art by 58.14%, 65.42%, and 66.39% in terms of mean average precision scores for KUTomaData, Laboro Tomato, and Rob2Pheno Annotated Tomato, respectively. This work can potentially improve the efficiency and accuracy of tomato harvesting, grading, and quality control processes.
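The mean average precision scores reported across these detection and recognition entries follow a standard recipe: rank detections by confidence, mark each as a true or false positive against the ground truth, accumulate precision at each recall step, and average over classes. A minimal sketch (using the simple all-points form, not any one paper's exact evaluation protocol):

```python
def average_precision(ranked_hits, num_gt):
    """AP from a confidence-ranked list of detections.
    ranked_hits: list of bools, True if the detection matched a ground-truth box.
    num_gt: total number of ground-truth objects for this class."""
    tp = 0
    precisions = []
    for i, hit in enumerate(ranked_hits, start=1):
        if hit:
            tp += 1
            precisions.append(tp / i)  # precision at each recall step
    return sum(precisions) / num_gt if num_gt else 0.0

def mean_average_precision(per_class):
    """per_class: list of (ranked_hits, num_gt) tuples, one per class."""
    aps = [average_precision(h, n) for h, n in per_class]
    return sum(aps) / len(aps)
```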
Affiliation(s)
- Asim Khan
- Department of Mechanical Engineering, Khalifa University, Abu Dhabi, UAE
- Khalifa University Center for Robotics and Autonomous Systems (KUCARS), Khalifa University, Abu Dhabi, UAE
- Taimur Hassan
- Department of Electrical, Computer and Biomedical Engineering, Abu Dhabi University, Abu Dhabi, UAE
- Muhammad Shafay
- Khalifa University Center for Robotics and Autonomous Systems (KUCARS), Khalifa University, Abu Dhabi, UAE
- Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, UAE
- Israa Fahmy
- Khalifa University Center for Robotics and Autonomous Systems (KUCARS), Khalifa University, Abu Dhabi, UAE
- Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, UAE
- Naoufel Werghi
- Khalifa University Center for Robotics and Autonomous Systems (KUCARS), Khalifa University, Abu Dhabi, UAE
- Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, UAE
- Seneviratne Mudigansalage
- Department of Mechanical Engineering, Khalifa University, Abu Dhabi, UAE
- Khalifa University Center for Robotics and Autonomous Systems (KUCARS), Khalifa University, Abu Dhabi, UAE
- Irfan Hussain
- Department of Mechanical Engineering, Khalifa University, Abu Dhabi, UAE
- Khalifa University Center for Robotics and Autonomous Systems (KUCARS), Khalifa University, Abu Dhabi, UAE
4. Kumar TA, Rajmohan R, Adeola Ajagbe S, Gaber T, Zeng XJ, Masmoudi F. A novel CNN gap layer for growth prediction of palm tree plantlings. PLoS One 2023; 18:e0289963. [PMID: 37566602] [PMCID: PMC10420369] [DOI: 10.1371/journal.pone.0289963]
Abstract
Monitoring palm tree seedlings and plantlings presents a formidable challenge because of the microscopic size of these organisms and the absence of distinguishing morphological characteristics. There is a demand for technical approaches that can provide restoration specialists with palm tree seedling monitoring systems that are high-resolution, quick, and environmentally friendly. Counting plantlings and identifying them down to the genus level can be an extremely time-consuming and challenging task. Convolutional neural networks (CNNs) have proven effective in many aspects of image recognition, although their performance differs by application, and the performance of existing CNN-based models for monitoring and predicting plantling growth can still be improved. To achieve this, a novel gap-layer-modified CNN architecture (GL-CNN) is proposed together with an effective IoT monitoring system and UAV technology. The UAV is employed for capturing plantling images, and the IoT model is used to obtain ground-truth information on plantling health. The proposed model is trained to predict successful or poor seedling growth for a given set of palm tree plantling images. The GL-CNN architecture is novel in terms of its defined convolution layers and the gap layer designed for output classification: the input image is processed by two 64×3 conv layers, two 128×3 conv layers, two 256×3 conv layers, and one 512×3 conv layer, and the output of the gap layer is passed through a ReLU classifier to determine the seedling class. To evaluate the proposed system, a new dataset of palm tree plantling images was collected in real time using UAV technology. The evaluation results showed that the proposed GL-CNN model performed better than existing CNN architectures, with an average accuracy of 95.96%.
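The "gap layer" named in the GL-CNN is, by its description, a global average pooling stage that collapses each channel's feature map to a single value before classification. A minimal, framework-free sketch of that operation (the layer sizes and data here are illustrative, not the paper's):

```python
def global_average_pool(feature_maps):
    """Collapse each channel's H×W feature map to its mean value.
    feature_maps: list of 2-D lists (one per channel)."""
    pooled = []
    for fmap in feature_maps:
        total = sum(sum(row) for row in fmap)
        count = sum(len(row) for row in fmap)
        pooled.append(total / count)
    return pooled
```

The pooled vector (one value per channel) is what the final classifier sees, which is why the gap layer removes the need for large dense layers over spatial positions.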
Affiliation(s)
- T. Ananth Kumar
- Computer Science and Engineering, IFET College of Engineering, Valavanur, Viluppuram, India
- R. Rajmohan
- Department of Computing Technologies, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India
- Sunday Adeola Ajagbe
- Department of Computer & Industrial Production Engineering, First Technical University Ibadan, Ibadan, Nigeria
- Tarek Gaber
- Computer Science & Software Engineering, University of Salford, Manchester, United Kingdom
- Faculty of Computers and Informatics, Suez Canal University, Ismailia, Egypt
- Xiao-Jun Zeng
- Department of Computer Science, University of Manchester, Manchester, United Kingdom
- Fatma Masmoudi
- College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia
5. Zargari A, Lodewijk GA, Mashhadi N, Cook N, Neudorf CW, Araghbidikashani K, Hays R, Kozuki S, Rubio S, Hrabeta-Robinson E, Brooks A, Hinck L, Shariati SA. DeepSea is an efficient deep-learning model for single-cell segmentation and tracking in time-lapse microscopy. Cell Rep Methods 2023; 3:100500. [PMID: 37426758] [PMCID: PMC10326378] [DOI: 10.1016/j.crmeth.2023.100500]
Abstract
Time-lapse microscopy is the only method that can directly capture the dynamics and heterogeneity of fundamental cellular processes at the single-cell level with high temporal resolution. Successful application of single-cell time-lapse microscopy requires automated segmentation and tracking of hundreds of individual cells over several time points. However, segmentation and tracking of single cells remain challenging for the analysis of time-lapse microscopy images, in particular for widely available and non-toxic imaging modalities such as phase-contrast imaging. This work presents a versatile and trainable deep-learning model, termed DeepSea, that allows for both segmentation and tracking of single cells in sequences of phase-contrast live microscopy images with higher precision than existing models. We showcase the application of DeepSea by analyzing cell size regulation in embryonic stem cells.
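Tracking single cells across frames, as DeepSea does, requires linking each segmented cell in frame t to its identity in frame t+1. The sketch below is not DeepSea's tracker; it is a generic greedy intersection-over-union matcher that illustrates the assignment problem, with the boxes and threshold as assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_cells(prev_boxes, curr_boxes, min_iou=0.3):
    """Greedy frame-to-frame assignment: each previous cell keeps its
    identity via its best-overlapping box in the current frame."""
    pairs = sorted(((iou(p, c), i, j)
                    for i, p in enumerate(prev_boxes)
                    for j, c in enumerate(curr_boxes)), reverse=True)
    used_p, used_c, links = set(), set(), {}
    for score, i, j in pairs:
        if score < min_iou:
            break
        if i not in used_p and j not in used_c:
            links[i] = j
            used_p.add(i)
            used_c.add(j)
    return links
```

Cells with no link above the threshold would be treated as divisions, deaths, or entries/exits in a full tracker.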
Affiliation(s)
- Abolfazl Zargari
- Department of Electrical and Computer Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Gerrald A. Lodewijk
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Najmeh Mashhadi
- Department of Computer Science and Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Nathan Cook
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Celine W. Neudorf
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Robert Hays
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Sayaka Kozuki
- Department of Molecular, Cell and Developmental Biology, University of California, Santa Cruz, Santa Cruz, CA, USA
- Institute for the Biology of Stem Cells, University of California, Santa Cruz, Santa Cruz, CA, USA
- Stefany Rubio
- Department of Molecular, Cell and Developmental Biology, University of California, Santa Cruz, Santa Cruz, CA, USA
- Institute for the Biology of Stem Cells, University of California, Santa Cruz, Santa Cruz, CA, USA
- Eva Hrabeta-Robinson
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, USA
- Angela Brooks
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, USA
- Lindsay Hinck
- Department of Molecular, Cell and Developmental Biology, University of California, Santa Cruz, Santa Cruz, CA, USA
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, USA
- Institute for the Biology of Stem Cells, University of California, Santa Cruz, Santa Cruz, CA, USA
- S. Ali Shariati
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, USA
- Institute for the Biology of Stem Cells, University of California, Santa Cruz, Santa Cruz, CA, USA
6. Pan J, Xia L, Wu Q, Guo Y, Chen Y, Tian X. Automatic strawberry leaf scorch severity estimation via faster R-CNN and few-shot learning. Ecol Inform 2022. [DOI: 10.1016/j.ecoinf.2022.101706]
7. Khan A, Asim W, Ulhaq A, Robinson RW. A deep semantic vegetation health monitoring platform for citizen science imaging data. PLoS One 2022; 17:e0270625. [PMID: 35895741] [PMCID: PMC9328533] [DOI: 10.1371/journal.pone.0270625]
Abstract
Automated monitoring of vegetation health in a landscape is commonly based on calculating the values of various vegetation indexes over a period of time. However, such approaches suffer from inaccurate estimation of vegetational change due to the over-reliance of index values on vegetation colour attributes and on the availability of multi-spectral bands; both are very strong assumptions in a citizen science project. One common observation is the sensitivity of colour attributes to seasonal variations and imaging devices, leading to false and inaccurate change detection and monitoring. In this article, we build upon our previous work on developing a Semantic Vegetation Index (SVI) and expand it into a semantic vegetation health monitoring platform for large landscapes. Unlike our previous work, we use RGB images of the Australian landscape in a quarterly series over six years (2015-2020). The SVI is based on deep semantic segmentation and is integrated with a citizen science project (Fluker Post) for automated environmental monitoring. The project has collected thousands of vegetation images shared by visitors from around 168 points located across Australian regions over six years. This paper first uses a deep-learning-based semantic segmentation model to classify vegetation in repeated photographs. A semantic vegetation index is then calculated and plotted as a time series to reflect seasonal variations and environmental impacts. The results show variational trends of vegetation cover for each year, and the semantic segmentation model performed well in calculating vegetation cover based on semantic pixels (overall accuracy = 97.7%). This work solves a number of problems related to changes in viewpoint, scale, zoom, and season in order to normalise RGB image data collected from different imaging devices.
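A vegetation-cover index computed from semantic pixels reduces, per image, to the fraction of pixels the segmentation model labels as vegetation. The exact label set and any normalisation used in the paper are not given in this abstract, so the sketch below assumes a single integer vegetation class:

```python
def semantic_vegetation_index(mask, vegetation_labels=frozenset({1})):
    """Fraction of image pixels whose semantic label is vegetation.
    mask: 2-D list of integer class labels from a segmentation model."""
    total = veg = 0
    for row in mask:
        for label in row:
            total += 1
            veg += label in vegetation_labels
    return veg / total if total else 0.0
```

Because the index counts semantic labels rather than raw colours, it is insensitive to the device and seasonal colour shifts that plague colour-based indexes.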
Affiliation(s)
- Asim Khan
- The Institute for Sustainable Industries and Liveable Cities (ISILC), College of Engineering and Science, Victoria University, Melbourne, Australia
- Warda Asim
- The Institute for Sustainable Industries and Liveable Cities (ISILC), College of Engineering and Science, Victoria University, Melbourne, Australia
- Anwaar Ulhaq
- The Institute for Sustainable Industries and Liveable Cities (ISILC), College of Engineering and Science, Victoria University, Melbourne, Australia
- School of Computing and Mathematics, Charles Sturt University, Port Macquarie, NSW, Australia
- Randall W. Robinson
- The Institute for Sustainable Industries and Liveable Cities (ISILC), College of Engineering and Science, Victoria University, Melbourne, Australia
8. Alpsoy A, Yavuz A, Elpek GO. Artificial intelligence in pathological evaluation of gastrointestinal cancers. Artif Intell Gastroenterol 2021; 2:141-156. [DOI: 10.35712/aig.v2.i6.141]
Abstract
The integration of artificial intelligence (AI) has shown promising benefits in many fields of diagnostic histopathology, including for gastrointestinal cancers (GCs), such as tumor identification, classification, and prognosis prediction. In parallel, recent evidence suggests that AI may help reduce the workload in gastrointestinal pathology by automatically detecting tumor tissues and evaluating prognostic parameters. In addition, AI seems to be an attractive tool for biomarker/genetic alteration prediction in GC, as it can extract a massive amount of information from visual data that is complex and only partially interpretable by pathologists. From this point of view, it is suggested that advances in AI could lead to revolutionary changes in many fields of pathology. Unfortunately, these findings do not exclude the possibility that there are still many hurdles to overcome before AI applications can be safely and effectively applied in actual pathology practice. These span a broad spectrum of challenges, from needs identification to cost-effectiveness. As a result, unlike in other disciplines of medicine, no histopathology-based AI application, including in GC, has ever been approved by a regulatory authority or for public reimbursement. The purpose of this review is to present data related to the applications of AI in pathology practice in GC and to present the challenges that need to be overcome for their implementation.
Affiliation(s)
- Anil Alpsoy
- Department of Pathology, Akdeniz University Medical School, Antalya 07070, Turkey
- Aysen Yavuz
- Department of Pathology, Akdeniz University Medical School, Antalya 07070, Turkey
- Gulsum Ozlem Elpek
- Department of Pathology, Akdeniz University Medical School, Antalya 07070, Turkey
9. Yoshida H, Kiyuna T. Requirements for implementation of artificial intelligence in the practice of gastrointestinal pathology. World J Gastroenterol 2021; 27:2818-2833. [PMID: 34135556] [PMCID: PMC8173389] [DOI: 10.3748/wjg.v27.i21.2818]
Abstract
Tremendous advances in artificial intelligence (AI) for medical image analysis have been achieved in recent years. The integration of AI is expected to cause a revolution in various areas of medicine, including gastrointestinal (GI) pathology. Currently, deep learning algorithms have shown promising benefits in areas of diagnostic histopathology, such as tumor identification, classification, prognosis prediction, and biomarker/genetic alteration prediction. While AI cannot substitute for pathologists, carefully constructed AI applications may increase workforce productivity and diagnostic accuracy in pathology practice. Regardless of these promising advances, and unlike in radiology or cardiology imaging, no histopathology-based AI application has been approved by a regulatory authority or for public reimbursement, implying that there are still obstacles to overcome before AI applications can be safely and effectively implemented in real-life pathology practice. Challenges have been identified at different stages of the development process, such as needs identification, data curation, model development, validation, regulation, modification of the daily workflow, and the cost-effectiveness balance. The aim of this review is to present the challenges in AI development, validation, and regulation that should be overcome for its implementation in real-life GI pathology practice.
Affiliation(s)
- Hiroshi Yoshida
- Department of Diagnostic Pathology, National Cancer Center Hospital, Tokyo 104-0045, Japan
- Tomoharu Kiyuna
- Digital Healthcare Business Development Office, NEC Corporation, Tokyo 108-8556, Japan
10. Health Assessment of Eucalyptus Trees Using Siamese Network from Google Street and Ground Truth Images. Remote Sensing 2021. [DOI: 10.3390/rs13112194]
Abstract
Urban greenery is an essential characteristic of the urban ecosystem, offering advantages such as improved air quality, human health benefits, storm-water run-off control, carbon reduction, and increased property values. Identification and continuous monitoring of vegetation (trees) is therefore of vital importance to urban life. This paper proposes a deep learning-based network, a Siamese convolutional neural network (SCNN), combined with a modified brute-force-based line-of-bearing (LOB) algorithm that evaluates the health of Eucalyptus trees as healthy or unhealthy and identifies their geolocation in real time from Google Street View (GSV) and ground truth images. Our dataset captures Eucalyptus trees in varied detail, covering multiple viewpoints, scales, shapes, and textures. The experiments were carried out in the Wyndham city council area in the state of Victoria, Australia. Our approach obtained an average accuracy of 93.2% in identifying healthy and unhealthy trees after training on around 4500 images and testing on 500 images. This study helps identify Eucalyptus trees with health issues, or dead trees, in an automated way that can facilitate urban green management and assist the local council in making decisions about plantation and tree care. Overall, this study shows that even against a complex background, most healthy and unhealthy Eucalyptus trees can be detected by our deep learning algorithm in real time.
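A line-of-bearing fix geolocates a tree by intersecting bearing rays from two known observation points (for example, two GSV camera positions). The paper's modified brute-force LOB algorithm is not reproduced in this abstract, so the following is only a minimal planar two-ray intersection illustrating the idea, with the coordinate and bearing conventions as assumptions:

```python
import math

def locate_from_bearings(p1, brg1, p2, brg2):
    """Planar line-of-bearing fix: intersect two rays, each defined by an
    observer position (x, y) and a bearing in degrees (0 = +y, clockwise)."""
    def direction(brg):
        rad = math.radians(brg)
        return math.sin(rad), math.cos(rad)
    d1, d2 = direction(brg1), direction(brg2)
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 via the 2x2 cross product.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None  # parallel bearings: no unique fix
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (dx * d2[1] - dy * d2[0]) / denom
    return p1[0] + t1 * d1[0], p1[1] + t1 * d1[1]
```

A brute-force variant would score many candidate intersections from multiple viewpoints and keep the most consistent fix.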