Four open questions on ensemble methods

Posted by Yuling Yao on Mar 01, 2021. Tags: modeling, computing

In a recent paper I wrote, I discussed a few open questions on ensemble methods:

  1. Both BMA and stacking are restricted to a linear mixture form. Would it be beneficial to consider other aggregation forms, such as a convolution of predictions or a geometric bridge of predictive densities? (See the first sketch after this list.)
  2. Stacking often relies on cross-validation. How can we better account for the finite-sample variance therein? (See the second sketch after this list.)
  3. While stacking can be equipped with many other scoring rules, what is the impact of the choice of scoring rule on the convergence rate and robustness?
  4. Beyond current model aggregation tools, can we develop an automated ensemble learner that could fully explore and expand the space of model classes—for example, using an autoregressive (AR) model and a moving-average (MA) model to learn an ARMA model?
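
To make the first question a bit more concrete, here is a minimal sketch in Python (my own illustration, not code from the paper) contrasting the linear mixture that BMA and stacking use with a log-linear "geometric bridge" of densities. The two Gaussian components and the weights are arbitrary choices for the example.

```python
# Minimal sketch (illustrative only): linear mixture vs. geometric bridge
# of two hypothetical Gaussian predictive densities with arbitrary weights.
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

y = np.linspace(-8, 8, 2001)
p1 = stats.norm.pdf(y, loc=-2, scale=1)    # predictive density of model 1
p2 = stats.norm.pdf(y, loc=3, scale=1.5)   # predictive density of model 2
w = np.array([0.6, 0.4])                   # example weights, not fitted

# Linear mixture: sum_k w_k p_k(y) -- the form BMA and stacking restrict to.
linear_pool = w[0] * p1 + w[1] * p2

# Geometric bridge: prod_k p_k(y)^{w_k}, renormalized to integrate to 1.
geometric_pool = p1 ** w[0] * p2 ** w[1]
geometric_pool /= trapezoid(geometric_pool, y)

# The linear pool is bimodal here, while the geometric pool stays unimodal
# and concentrates between the two component means.
print("highest mode of linear pool near:", y[np.argmax(linear_pool)])
print("mode of geometric pool near:", y[np.argmax(geometric_pool)])
```

And for the second question, here is a toy sketch of the usual stacking objective: maximize the leave-one-out log score of the weighted mixture over the simplex. The `lpd` matrix below is fake data standing in for log p_k(y_i | y_{-i}); the point is that the optimization treats these cross-validated log densities as exact, which is where the finite-sample variance question enters.

```python
# Minimal sketch (illustrative only, with fake LOO log densities) of fitting
# stacking weights by maximizing the leave-one-out log score of the mixture.
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp, softmax

rng = np.random.default_rng(0)
n, K = 100, 3
lpd = rng.normal(loc=-1.0, scale=0.5, size=(n, K))  # fake log p_k(y_i | y_{-i})

def neg_log_score(theta):
    # Softmax keeps the weights on the simplex; theta is unconstrained.
    w = softmax(np.append(theta, 0.0))
    # Objective: -sum_i log( sum_k w_k p_k(y_i | y_{-i}) ), computed in log space.
    return -np.sum(logsumexp(lpd + np.log(w), axis=1))

fit = minimize(neg_log_score, x0=np.zeros(K - 1), method="BFGS")
weights = softmax(np.append(fit.x, 0.0))
print("stacking weights:", np.round(weights, 3))
```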
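In this sketch the LOO log densities are plugged in as if known; accounting for their sampling variability when n is small is exactly the open part of question 2.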

I think they are all important directions!