Ensemble member selection using multi-objective optimization
2009 (English). In: IEEE Symposium on Computational Intelligence and Data Mining, 2009, pp. 245-251. Conference paper (refereed).
Both theory and a wealth of empirical studies have established that ensembles are more accurate than single predictive models. Unfortunately, the problem of how to maximize ensemble accuracy is, especially for classification, far from solved. In essence, the key problem is to find a suitable criterion, typically based on training or selection set performance, highly correlated with ensemble accuracy on novel data. Several studies have, however, shown that it is difficult to come up with a single measure, such as ensemble or base classifier selection set accuracy, or some measure based on diversity, that is a good general predictor for ensemble test accuracy. This paper presents a novel technique that for each learning task searches for the most effective combination of given atomic measures, by means of a genetic algorithm. Ensembles built from either neural networks or random forests were empirically evaluated on 30 UCI datasets. The experimental results show that when using the generated combined optimization criteria to rank candidate ensembles, a higher test set accuracy for the top ranked ensemble was achieved, compared to using ensemble accuracy on selection data alone. Furthermore, when creating ensembles from a pool of neural networks, the use of the generated combined criteria was shown to generally outperform the use of estimated ensemble accuracy as the single optimization criterion.
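The paper's core idea, evolving a combination of atomic measures so that the combined score correlates with ensemble accuracy on unseen data, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the weight-vector encoding, the averaging crossover, the Gaussian mutation, and all parameter values (`pop_size`, `generations`, `mutation_rate`) are assumptions, and Pearson correlation with selection-set accuracy stands in for whatever fitness the paper actually optimizes.

```python
import random

def pearson(xs, ys):
    # Pearson correlation between two equal-length score lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def combined_score(weights, measures):
    # Linear combination of atomic measures for one candidate ensemble
    # (e.g. selection-set accuracy, diversity); the linear form is an
    # assumption made for this sketch.
    return sum(w * m for w, m in zip(weights, measures))

def evolve_weights(candidates, target_acc, n_measures, pop_size=30,
                   generations=40, mutation_rate=0.2, seed=0):
    """Genetic search for measure weights whose combined score best
    correlates with ensemble accuracy on held-out (selection) data.

    candidates: list of per-ensemble atomic-measure vectors.
    target_acc: accuracy of each candidate ensemble on selection data.
    """
    rng = random.Random(seed)

    def fitness(w):
        scores = [combined_score(w, m) for m in candidates]
        return pearson(scores, target_acc)

    # Random initial population of weight vectors.
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(n_measures)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]          # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # crossover
            if rng.random() < mutation_rate:
                i = rng.randrange(n_measures)
                child[i] += rng.gauss(0.0, 0.3)          # mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

Once evolved on a pool of candidate ensembles, the returned weight vector would be used to rank new candidates by `combined_score`, and the top-ranked ensemble selected for deployment.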
Place, publisher, year, edition, pages: 2009, pp. 245-251.
Research subject: Computer and Systems Sciences
Identifiers:
URN: urn:nbn:se:su:diva-33426
DOI: 10.1109/CIDM.2009.4938656
ISBN: 978-1-4244-2765-9
OAI: oai:DiVA.org:su-33426
DiVA: diva2:283104
IEEE Symposium on Computational Intelligence and Data Mining (CIDM), Nashville, TN, March 30 - April 2, 2009