Retrospective Cohort Study
Copyright ©The Author(s) 2023.
World J Clin Cases. Nov 26, 2023; 11(33): 7951-7964
Published online Nov 26, 2023. doi: 10.12998/wjcc.v11.i33.7951
Table 3 Summary of the hyperparameter values for the best random forest, classification and regression tree, Naïve Bayes, and eXtreme gradient boosting models
| Methods | Hyperparameters | Best value | Meaning |
|---|---|---|---|
| RF | Mtry | 8 | The number of random features considered at each split |
| RF | Ntree | 500 | The number of trees in the forest |
| CART | Minsplit | 20 | The minimum number of observations required to attempt a split at a node |
| CART | Minbucket | 7 | The minimum number of observations in a terminal node |
| CART | Maxdepth | 10 | The maximum depth of any node in the final tree |
| CART | Xval | 10 | The number of cross-validations |
| CART | Cp | 0.03588 | Complexity parameter: the minimum improvement required in the model at each node |
| XGBoost | Nrounds | 100 | The number of boosting iterations |
| XGBoost | Max_depth | 3 | The maximum depth of a tree |
| XGBoost | Eta | 0.4 | Shrinkage coefficient (learning rate) applied to each tree |
| XGBoost | Gamma | 0 | The minimum loss reduction required to make a split |
| XGBoost | Subsample | 0.75 | Subsample ratio of the training instances (rows) used to grow each tree |
| XGBoost | Colsample_bytree | 0.8 | Subsample ratio of columns when constructing each tree |
| XGBoost | Rate_drop | 0.5 | The fraction of trees dropped per iteration (DART) |
| XGBoost | Skip_drop | 0.05 | The probability of skipping the dropout procedure during a boosting iteration |
| XGBoost | Min_child_weight | 1 | The minimum sum of instance weights required in a child node |
| NB | Fl | 0 | Adjustment of the Laplace smoother |
| NB | Usekernel | TRUE | Use a kernel density estimate for continuous variables instead of a Gaussian density estimate |
| NB | Adjust | 1 | Bandwidth adjustment of the kernel density estimate |
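To make the table concrete, the sketch below instantiates models with hyperparameters analogous to those in Table 3. This is an illustration only: the hyperparameter names in the table follow R packages (caret, rpart, randomForest, xgboost), and here we map them onto approximate scikit-learn equivalents. In particular, GradientBoostingClassifier stands in for XGBoost, rpart's `cp` only roughly corresponds to `ccp_alpha`, and GaussianNB has no kernel-density option corresponding to `usekernel = TRUE`.

```python
# Hypothetical scikit-learn analogues of the Table 3 hyperparameters.
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

# Random forest: mtry -> max_features, ntree -> n_estimators
rf = RandomForestClassifier(n_estimators=500, max_features=8, random_state=0)

# CART: minsplit/minbucket/maxdepth map to sklearn's split controls;
# rpart's cp has no exact analogue -- ccp_alpha is the closest pruning knob.
cart = DecisionTreeClassifier(
    min_samples_split=20,
    min_samples_leaf=7,
    max_depth=10,
    ccp_alpha=0.03588,
    random_state=0,
)

# XGBoost analogue: nrounds -> n_estimators, eta -> learning_rate,
# subsample -> row subsampling, max_features approximates colsample_bytree.
# (rate_drop/skip_drop are DART-specific and have no sklearn counterpart.)
xgb_like = GradientBoostingClassifier(
    n_estimators=100,
    max_depth=3,
    learning_rate=0.4,
    subsample=0.75,
    max_features=0.8,
    random_state=0,
)

# Naive Bayes: fL = 0 means no Laplace adjustment; GaussianNB assumes a
# Gaussian density per feature (no kernel density estimate available).
nb = GaussianNB()
```

Note that `max_features=8` requires the training data to have at least eight features, mirroring the constraint that `mtry` cannot exceed the number of predictors.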