Four open questions on ensemble methods

Posted by Yuling Yao on Mar 01, 2021.
In a recent paper I wrote, I discussed a few open questions on ensemble methods:
- Both BMA and stacking are restricted to a linear mixture form; would it be beneficial to consider other aggregation forms, such as a convolution of predictions or a geometric bridge of predictive densities?
- Stacking often relies on some form of cross-validation; how can we better account for the finite-sample variance it introduces?
- While stacking can be equipped with many other scoring rules, what is the impact of the choice of scoring rule on the convergence rate and robustness?
- Beyond current model aggregation tools, can we develop an automated ensemble learner that could fully explore and expand the space of model classes—for example, using an autoregressive (AR) model and a moving-average (MA) model to learn an ARMA model?
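To make the first question concrete, here is a minimal sketch contrasting the linear mixture form with a geometric bridge, using two made-up Gaussian predictive densities and an arbitrary weight (in practice the weight would be learned, e.g., by stacking):

```python
import numpy as np

# Two hypothetical Gaussian predictive densities for a scalar outcome:
# model 1 predicts N(0, 1), model 2 predicts N(2, 1). Parameters are
# illustrative only, not from any fitted model.
def p1(y):
    return np.exp(-0.5 * y**2) / np.sqrt(2 * np.pi)

def p2(y):
    return np.exp(-0.5 * (y - 2)**2) / np.sqrt(2 * np.pi)

w = 0.5  # ensemble weight; stacking would estimate this from data
y = np.linspace(-6.0, 8.0, 2001)
dy = y[1] - y[0]

# Linear mixture (the form BMA and stacking use): w * p1 + (1 - w) * p2.
# Already a valid density; here it is bimodal, with mass near both 0 and 2.
linear = w * p1(y) + (1 - w) * p2(y)

# Geometric bridge: p1^w * p2^(1 - w), renormalized on the grid.
# For these two Gaussians it collapses to a single mode between them.
geo_unnorm = p1(y)**w * p2(y)**(1 - w)
geo = geo_unnorm / (geo_unnorm.sum() * dy)
```

The qualitative difference is visible already in this toy case: the linear mixture preserves both components' modes, while the geometric bridge produces a unimodal compromise, so the two aggregation forms can behave very differently on multimodal prediction problems.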
I think they are all important directions!