Overproduce-and-Select: The Grim Reality
2013 (English). In: 2013 IEEE Symposium on Computational Intelligence and Ensemble Learning (CIEL), IEEE conference proceedings, 2013, pp. 52-59. Conference paper (Refereed).
Overproduce-and-select (OPAS) is a frequently used paradigm for building ensembles. In static OPAS, a large number of base classifiers are trained, before a subset of the available models is selected to be combined into the final ensemble. In general, the selected classifiers are supposed to be accurate and diverse for the OPAS strategy to result in highly accurate ensembles, but exactly how this is enforced in the selection process is not obvious. Most often, either individual models or ensembles are evaluated, using some performance metric, on available and labeled data. Naturally, the underlying assumption is that an observed advantage for the models (or the resulting ensemble) will carry over to test data. In the experimental study, a typical static OPAS scenario, using a pool of artificial neural networks and a number of very natural and frequently used performance measures, is evaluated on 22 publicly available data sets. The discouraging result is that although a fairly large proportion of the ensembles obtained higher test set accuracies, compared to using the entire pool as the ensemble, none of the selection criteria could be used to identify these highly accurate ensembles. Despite only investigating a specific scenario, we argue that the settings used are typical for static OPAS, thus making the results general enough to question the entire paradigm.
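To make the paradigm under study concrete, below is a minimal sketch of static overproduce-and-select with majority voting, where subsets of a pool are scored on held-out labeled data and the best-scoring subset becomes the ensemble. This is an illustration only, not the paper's experimental setup: the pool here is a set of hypothetical decision stumps on synthetic 2-D data, whereas the study used pools of artificial neural networks on 22 public data sets.

```python
# Sketch of static OPAS: overproduce a pool, then select the subset
# with the best validation accuracy. Hypothetical stump classifiers
# stand in for the trained base models.
import itertools
import random

random.seed(0)

def make_stump(feature, threshold):
    """A one-feature threshold classifier ('decision stump')."""
    return lambda x: 1 if x[feature] > threshold else 0

def majority_vote(ensemble, x):
    votes = sum(clf(x) for clf in ensemble)
    return 1 if votes * 2 > len(ensemble) else 0

def accuracy(ensemble, data):
    return sum(majority_vote(ensemble, x) == y for x, y in data) / len(data)

# Overproduce: build a pool on synthetic 2-D data labeled by x0 + x1 > 1.
points = [(random.random(), random.random()) for _ in range(200)]
data = [(p, 1 if p[0] + p[1] > 1 else 0) for p in points]
train, val = data[:100], data[100:]
pool = [make_stump(f, t / 10) for f in (0, 1) for t in range(1, 10)]

# Select: evaluate candidate subsets on validation data and keep the
# subset whose validation accuracy (the selection criterion) is highest.
best_subset, best_acc = None, -1.0
for k in (1, 3, 5):
    for subset in itertools.combinations(pool, k):
        acc = accuracy(subset, val)
        if acc > best_acc:
            best_subset, best_acc = subset, acc
```

The paper's negative result concerns exactly this last step: a subset chosen for its advantage on available labeled data is assumed to keep that advantage on test data, and the criteria evaluated in the study failed to identify the subsets that actually did.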
Place, publisher, year, edition, pages
IEEE conference proceedings, 2013, pp. 52-59.
Research subject: Computer and Systems Sciences
Identifiers
URN: urn:nbn:se:su:diva-97229
DOI: 10.1109/CIEL.2013.6613140
ISBN: 978-1-4673-5853-8
OAI: oai:DiVA.org:su-97229
DiVA: diva2:676273
2013 IEEE Symposium on Computational Intelligence and Ensemble Learning (CIEL), 16-19 April 2013, Singapore