Comparing methods for generating diverse ensembles of artificial neural networks
2010 (English). In: International Joint Conference on Neural Networks (IJCNN) 2010, 2010, pp. 1–6. Conference paper (Refereed)
It is well known that ensemble performance relies heavily on sufficient diversity among the base classifiers. With this in mind, the strategy used to balance diversity against base classifier accuracy must be considered a key component of any ensemble algorithm. This study evaluates the predictive performance of neural network ensembles, specifically comparing straightforward techniques to more sophisticated ones. In particular, the sophisticated methods GASEN and NegBagg are compared to straightforward methods in which each ensemble member is trained independently of the others. In experiments on 31 publicly available data sets, the straightforward methods clearly outperformed the sophisticated methods, thus questioning the use of the more complex algorithms.
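The "straightforward" strategy the abstract favors can be sketched as follows. This is a minimal illustration, not the paper's actual setup: a tiny logistic-regression learner stands in for the neural network base classifiers, diversity comes only from bootstrap resampling, and all function names and parameters here are hypothetical.

```python
import numpy as np

def train_member(X, y, rng, epochs=200, lr=0.1):
    # Each member is trained independently on its own bootstrap sample;
    # resampling is the only source of diversity in this sketch.
    idx = rng.integers(0, len(X), len(X))
    Xb, yb = X[idx], y[idx]
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(Xb @ w + b)))  # sigmoid output
        grad = p - yb                             # logistic-loss gradient
        w -= lr * Xb.T @ grad / len(Xb)
        b -= lr * grad.mean()
    return w, b

def ensemble_predict(members, X):
    # Combine the independently trained members by simple majority vote.
    votes = np.stack([(X @ w + b) > 0 for w, b in members])
    return (votes.mean(axis=0) >= 0.5).astype(int)

rng = np.random.default_rng(0)
# Toy linearly separable data: label is 1 when x0 + x1 > 0.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

members = [train_member(X, y, rng) for _ in range(11)]
acc = (ensemble_predict(members, X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Note that, unlike GASEN (which evolves combination weights with a genetic algorithm) or NegBagg (which couples members through negative correlation learning during bagging), nothing here optimizes diversity explicitly; each member never sees the others.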
Place, publisher, year, edition, pages
2010. pp. 1–6.
Research subject Computer and Systems Sciences
Identifiers
URN: urn:nbn:se:su:diva-116680
DOI: 10.1109/IJCNN.2010.5596763
ISBN: 978-1-4244-6916-1
OAI: oai:DiVA.org:su-116680
DiVA: diva2:807224
International Joint Conference on Neural Networks (IJCNN), Barcelona, 18-23 July 2010