Review
Copyright ©The Author(s) 2025.
World J Clin Oncol. Jun 24, 2025; 16(6): 104299
Published online Jun 24, 2025. doi: 10.5306/wjco.v16.i6.104299
Table 1 Six Bayesian network parameter learning algorithms
| Algorithm | Handles incomplete data | Basic principle | Advantages and disadvantages | Ref. |
| --- | --- | --- | --- | --- |
| Maximum likelihood estimation | No | Estimates parameters by maximizing the likelihood function of the observed data | Converges quickly; incorporates no prior knowledge | [18] |
| Bayesian method | No | Combines a prior distribution (often Dirichlet) with observed data to obtain a posterior distribution | Incorporates prior knowledge; computationally intensive | [19] |
| Expectation-maximization | Yes | Iterates expectation (E) and maximization (M) steps to estimate parameters in the presence of missing data | Effective with missing data; can converge to local optima | [20] |
| Robust Bayesian estimation | Yes | Represents conditional probabilities as probability intervals, without assumptions about the missing-data mechanism | Requires no assumptions about missing data; interval width indicates the reliability of the estimate | [12] |
| Monte Carlo method | Yes | Uses random sampling (e.g., Gibbs sampling) to estimate expectations of the joint probability distribution | Flexible and able to handle complex models; computationally expensive, and convergence can be slow | [21] |
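The first two rows of the table can be made concrete with a minimal sketch, not taken from the article: estimating one conditional probability table (CPT) for a binary child node with a binary parent, by maximum likelihood and by the Bayesian method with a symmetric Dirichlet prior (the functions, variable names, and example counts below are illustrative assumptions).

```python
def mle_cpt(counts):
    """Maximum likelihood estimate: normalize observed counts
    separately for each parent configuration."""
    return {pa: [c / sum(cs) for c in cs] for pa, cs in counts.items()}


def bayes_cpt(counts, alpha=1.0):
    """Bayesian estimate: posterior mean under a symmetric
    Dirichlet(alpha) prior, (n_k + alpha) / (n + K * alpha)."""
    return {pa: [(c + alpha) / (sum(cs) + alpha * len(cs)) for c in cs]
            for pa, cs in counts.items()}


# Hypothetical data: counts[parent][x] = number of cases with X = x, Pa = parent.
counts = {0: [2, 8], 1: [5, 5]}

mle = mle_cpt(counts)      # P(X=1 | Pa=0) = 8/10 = 0.8
bayes = bayes_cpt(counts)  # (8+1)/(10+2) = 0.75, pulled toward the uniform prior
```

The Dirichlet prior acts as pseudo-counts, which is why the Bayesian estimate is less extreme than the MLE on small samples; as the observed counts grow, the two estimates converge.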