Published online Jun 28, 2020. doi: 10.35713/aic.v1.i1.31
Peer-review started: March 21, 2020
First decision: April 22, 2020
Revised: May 2, 2020
Accepted: June 7, 2020
Article in press: June 7, 2020
Little attention has been paid to the frequency and preventable causes of discordant classification results in digital pathological image (DPI) analysis using machine learning (ML) on heterochronously obtained DPIs, i.e., repeated scans of the same slide at different times.
The authors compared classification results between paired DPIs of the same microscope slide obtained from two independent scans with the same slide scanner.
In this study, the authors elucidated the frequency and preventable causes of discordant classification results in ML-aided analysis of heterochronously obtained DPIs.
The authors created paired DPIs by scanning 298 hematoxylin and eosin-stained slides containing 584 tissues twice with a virtual slide scanner. The paired DPIs were analyzed by their ML-aided classification model. Differences in color and blur between the non-flipped group (concordant classification between scans) and the flipped group (discordant classification) were compared using the per-channel L1-norm and a blur index.
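The two image-comparison measures used in the methods can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-channel L1-norm is the mean absolute per-pixel difference in each color channel, and the blur index here is an assumed proxy (reciprocal of the variance of a discrete Laplacian, so higher values mean blurrier), since the paper's exact blur index is not specified in this abstract. Images are represented as plain nested lists to keep the sketch dependency-free.

```python
def channel_l1_norm(img_a, img_b):
    """Mean per-channel L1 distance between two aligned RGB images.

    Images are nested lists: img[row][col] = (r, g, b).
    Returns a list of three per-channel mean absolute differences.
    """
    totals = [0.0, 0.0, 0.0]
    n = 0
    for row_a, row_b in zip(img_a, img_b):
        for pix_a, pix_b in zip(row_a, row_b):
            for c in range(3):
                totals[c] += abs(pix_a[c] - pix_b[c])
            n += 1
    return [t / n for t in totals]


def blur_index(gray):
    """Illustrative blur index for a grayscale image (nested lists).

    Computes the variance of the 4-neighbor discrete Laplacian over
    interior pixels, then returns its reciprocal so that HIGHER values
    indicate BLURRIER images (matching the convention in the results,
    where the flipped group had a higher blur index). This proxy is an
    assumption, not the paper's actual index.
    """
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1]
                   - 4.0 * gray[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return 1.0 / (var + 1e-12)  # epsilon avoids division by zero
```

Under this convention, a smoothed (blurred) scan of the same tissue yields a higher `blur_index` than a sharp scan, while two identical scans give a zero `channel_l1_norm` in every channel.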
Discordant classification results were observed in 23.1% of the paired DPIs obtained by two independent scans of the same microscope slide. There was no significant difference in the L1-norm of any color channel between the two groups; however, the flipped group showed a significantly higher blur index than the non-flipped group.
The results suggest that differences in blur, not color, between the paired DPIs may cause discordant classification results.
An ML-aided classification model for DPIs should be tested for this potential cause of reduced reproducibility. Future work should develop a slide scanner and/or a preprocessing method that minimizes DPI blur.