Classification of Microarrays with kNN: Comparison of Dimensionality Reduction Methods
Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
2007 (English). In: Intelligent Data Engineering and Automated Learning - IDEAL 2007 / [ed] Hujun Yin, Peter Tino, Emilio Corchado, Will Byrne, Xin Yao. Berlin, Heidelberg: Springer Verlag, 2007, pp. 800-809. Conference paper, published paper (refereed).
Abstract [en]

Dimensionality reduction can often improve the performance of the k-nearest neighbor classifier (kNN) for high-dimensional data sets, such as microarrays. The effect of the choice of dimensionality reduction method on the predictive performance of kNN for classifying microarray data is an open issue, and four common dimensionality reduction methods, Principal Component Analysis (PCA), Random Projection (RP), Partial Least Squares (PLS) and Information Gain (IG), are compared on eight microarray data sets. It is observed that all dimensionality reduction methods result in more accurate classifiers than those obtained from the raw attributes. Furthermore, both PCA and PLS reach their best accuracies with fewer components than the other two methods, and RP needs far more components than the others to outperform kNN on the non-reduced data set. None of the dimensionality reduction methods can be concluded to generally outperform the others, although PLS is shown to be superior on all four binary classification tasks. The main conclusion of the study is that the choice of dimensionality reduction method can be of major importance when classifying microarrays with kNN.
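A minimal sketch of the kind of comparison the abstract describes, using scikit-learn on a synthetic stand-in for a microarray data set (few samples, many features). This does not reproduce the paper's actual data sets, parameter choices, or evaluation protocol; PLS and IG are omitted for brevity, and the component counts are illustrative only.

```python
# Illustrative sketch (NOT the paper's exact setup): kNN accuracy on raw
# features vs. PCA- and random-projection-reduced features.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.random_projection import GaussianRandomProjection

# Synthetic high-dimensional data: 100 samples, 2000 features,
# mimicking the sample/feature imbalance typical of microarrays.
X, y = make_classification(n_samples=100, n_features=2000,
                           n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# n_components=20 is an arbitrary illustrative choice.
pipelines = {
    "raw": make_pipeline(KNeighborsClassifier(n_neighbors=5)),
    "pca": make_pipeline(PCA(n_components=20),
                         KNeighborsClassifier(n_neighbors=5)),
    "rp":  make_pipeline(GaussianRandomProjection(n_components=20,
                                                  random_state=0),
                         KNeighborsClassifier(n_neighbors=5)),
}

results = {}
for name, pipe in pipelines.items():
    results[name] = pipe.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: {results[name]:.2f}")
```

In practice one would sweep the number of components per method, as the paper does, since the abstract's key observation is that PCA and PLS peak at far fewer components than RP.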

Place, publisher, year, edition, pages
Berlin, Heidelberg: Springer Verlag, 2007, pp. 800-809.
Series
Lecture Notes in Computer Science, 4881/2007
National Category
Information Science
Identifiers
URN: urn:nbn:se:su:diva-37828
DOI: 10.1007/978-3-540-77226-2_80
ISBN: 978-3-540-77225-5 (print)
OAI: oai:DiVA.org:su-37828
DiVA: diva2:305374
Conference
8th International Conference on Intelligent Data Engineering and Automated Learning, LNCS 4881
Available from: 2010-03-23. Created: 2010-03-23. Last updated: 2011-06-29. Bibliographically approved.

Open Access in DiVA

fulltext (360 kB), 537 downloads
File information
File name: FULLTEXT01.pdf
File size: 360 kB
Checksum: SHA-512
24de53533bd5599960bfad1a7a3445c7fcc6538d17c70985148eb014781e1144c514fdb9b77b58412d368bb67925edf3408e558fd18725fb3da2cd0af53c506a
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text

Search in DiVA

By author/editor
Deegalla, Sampath; Boström, Henrik
By organisation
Department of Computer and Systems Sciences
Information Science

Total: 537 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 54 hits