Observational Study
Copyright ©The Author(s) 2025.
World J Psychiatry. Sep 19, 2025; 15(9): 108359
Published online Sep 19, 2025. doi: 10.5498/wjp.v15.i9.108359
Table 1 Characteristics of the collected dataset

| Feature           | Schizophrenia (n = 67)                   | Healthy control (n = 46)                 |
|-------------------|------------------------------------------|------------------------------------------|
| Sex (female/male) | 15/52                                    | 10/36                                    |
| Mean age, years   | Female: 45.10 ± 7.72; male: 41.74 ± 9.00 | Female: 30.60 ± 6.45; male: 36.82 ± 5.24 |
| Age range, years  | Female: 33-54; male: 18-59               | Female: 25-45; male: 20-52               |
| Education, years  | Female: 5.90 ± 1.44; male: 17.00 ± 21.01 | Female: 8.60 ± 1.25; male: 16.75 ± 3.12  |
Table 2 Train and test image counts in the collected dataset

| Diagnosis       | Orientation   | Train images | Test images | Total images |
|-----------------|---------------|--------------|-------------|--------------|
| Schizophrenia   | Left to right | 1461         | 416         | 1877         |
| Healthy control | Left to right | 912          | 260         | 1172         |
| Schizophrenia   | Top to bottom | 1461         | 416         | 1877         |
| Healthy control | Top to bottom | 912          | 260         | 1172         |
Table 3 Sociodemographic information of participants for the Mendeley dataset

| Characteristic     | DME   | CNV   | Drusen | Normal |
|--------------------|-------|-------|--------|--------|
| Number of patients | 709   | 791   | 713    | 3548   |
| Mean age, years    | 57    | 83    | 82     | 60     |
| Age range, years   | 20-90 | 58-97 | 40-95  | 21-86  |
| Male, %            | 38.3  | 54.2  | 44.4   | 59.2   |
| Female, %          | 61.7  | 45.8  | 55.6   | 40.8   |
Table 4 Distribution of the images for the Mendeley dataset

| Diagnosis | Train images | Test images | Total |
|-----------|--------------|-------------|-------|
| CNV       | 37213        | 242         | 37455 |
| DME       | 11356        | 242         | 11598 |
| Drusen    | 8624         | 242         | 8866  |
| Normal    | 26323        | 242         | 26565 |
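As a quick sanity check, the per-row counts in Tables 2 and 4 can be verified to be internally consistent (train + test = total). The figures below are transcribed directly from those tables; the script itself is only an illustrative check, not part of the study's pipeline.

```python
# Sanity check: train + test = total for each row of Tables 2 and 4.
# Counts transcribed from the tables above; no other assumptions.
rows = {
    "Schizophrenia (collected)":   (1461, 416, 1877),
    "Healthy control (collected)": (912, 260, 1172),
    "CNV (OCT2017)":               (37213, 242, 37455),
    "DME (OCT2017)":               (11356, 242, 11598),
    "Drusen (OCT2017)":            (8624, 242, 8866),
    "Normal (OCT2017)":            (26323, 242, 26565),
}
for name, (train, test, total) in rows.items():
    assert train + test == total, f"mismatch in {name}"
print("all row totals consistent")
```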
Table 5 Test results of our proposed convolutional neural network for our collected dataset, %

| Dataset            | Class           | Accuracy | Sensitivity | Specificity | Precision | F1-score | AUROC |
|--------------------|-----------------|----------|-------------|-------------|-----------|----------|-------|
| From left to right | Healthy control | 97.49    | 96.92       | 97.84       | 96.55     | 96.74    | 97.38 |
|                    | Schizophrenia   |          | 97.84       | 96.92       | 98.07     | 97.95    | 97.38 |
|                    | Overall         |          | 97.38       | 97.38       | 97.31     | 97.35    | 97.38 |
| From top to bottom | Healthy control | 98.96    | 98.08       | 99.52       | 99.22     | 98.65    | 98.80 |
|                    | Schizophrenia   |          | 99.52       | 98.08       | 98.81     | 99.16    | 98.80 |
|                    | Overall         |          | 98.80       | 98.80       | 99.02     | 98.91    | 98.80 |
Table 6 Test results of the proposed Self-AttentionNeXt for the Mendeley dataset (Optical coherence tomography 2017 dataset), %

| Class   | Accuracy | Sensitivity | Specificity | Precision | F1-score | AUROC |
|---------|----------|-------------|-------------|-----------|----------|-------|
| CNV     | 95.87    | 99.59       | 95.59       | 88.28     | 93.59    | 97.59 |
| DME     |          | 91.32       | 99.86       | 99.55     | 95.26    | 95.59 |
| Drusen  |          | 93.39       | 100         | 100       | 96.58    | 96.69 |
| Normal  |          | 99.17       | 99.04       | 97.17     | 98.16    | 99.10 |
| Overall |          | 95.87       | 98.63       | 96.25     | 95.90    | 97.25 |
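The per-class metrics reported in Tables 5 and 6 (sensitivity, specificity, precision, F1-score) follow the standard one-vs-rest definitions over a confusion matrix. The sketch below illustrates how such values are derived; the 2 × 2 matrix used here is dummy data for demonstration, not the paper's actual confusion matrix.

```python
def one_vs_rest_metrics(cm, cls):
    """Sensitivity, specificity, precision, and F1 (in %) for class
    index `cls` of a square confusion matrix `cm`, where
    cm[i][j] = count of samples with true class i predicted as class j."""
    n = len(cm)
    tp = cm[cls][cls]
    fn = sum(cm[cls][j] for j in range(n) if j != cls)   # missed positives
    fp = sum(cm[i][cls] for i in range(n) if i != cls)   # false alarms
    tn = sum(cm[i][j] for i in range(n) for j in range(n)
             if i != cls and j != cls)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {k: round(100 * v, 2) for k, v in
            (("sensitivity", sensitivity), ("specificity", specificity),
             ("precision", precision), ("f1", f1))}

# Illustrative 2-class matrix (rows: true class, cols: predicted class);
# counts here are invented, not taken from the study.
cm = [[252, 8],    # healthy control
      [13, 403]]   # schizophrenia
print(one_vs_rest_metrics(cm, 0))
```

The same routine applied class by class (CNV, DME, drusen, normal) against a 4 × 4 confusion matrix would reproduce the layout of Table 6.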
Table 7 Comparative results

| Ref.                  | Model                                             | Dataset | Results (%) |
|-----------------------|---------------------------------------------------|---------|-------------|
| He et al[51]          | Swin-poly transformer network                     | OCT2017 | Accuracy: 99.80; precision: 99.80; recall: 99.80; F1-score: 99.80; AUC: 99.99 |
| Yoo et al[52]         | Few-shot learning, generative adversarial network | OCT2017 | Accuracy: 93.90 |
| Huang et al[53]       | Novel layer-guided CNN                            | OCT2017 | Accuracy: 93.30; sensitivity: 93.30; specificity: 93.30; precision: 91.50 |
| Rajagopalan et al[54] | CNN, Kuan filter                                  | OCT2017 | Accuracy: 95.70 |
| This study            | Self-AttentionNeXt                                | OCT2017 | Accuracy: 95.87; sensitivity: 95.86; specificity: 98.62; F1-score: 96.25; precision: 95.89 |