Nearest Neighbor Classification in High Dimensions
Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences. ORCID iD: 0000-0003-1100-8334
2024 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

The simple k-nearest neighbor (kNN) method can be used to learn from high-dimensional data, such as images and microarrays, without any modification to the original algorithm. However, studies show that kNN's accuracy is often poor in high dimensions due to the curse of dimensionality: a large number of instances is required to maintain a given level of accuracy, and distance measures such as the Euclidean distance may become meaningless in high dimensions. Dimensionality reduction can therefore be used to help nearest neighbor classifiers overcome the curse of dimensionality. Although there are success stories of employing dimensionality reduction methods, the choice of which methods to use, and how to use them to improve the effectiveness of the nearest neighbor algorithm, remains an open problem.
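To make the setting concrete, the following is a minimal illustrative sketch (my own, not part of the thesis) of kNN on a synthetic high-dimensional dataset, with and without a dimensionality reduction step; it assumes scikit-learn and uses PCA as one example of a reduction method:

```python
# Illustrative sketch (not from the thesis): kNN on synthetic high-dimensional
# data, with and without PCA as an example dimensionality reduction step.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Stand-in for a high-dimensional dataset (e.g. a microarray):
# few instances, many attributes.
X, y = make_classification(n_samples=100, n_features=2000,
                           n_informative=20, random_state=0)

raw_knn = KNeighborsClassifier(n_neighbors=3)
pca_knn = make_pipeline(PCA(n_components=20),
                        KNeighborsClassifier(n_neighbors=3))

print("kNN on raw attributes:", cross_val_score(raw_knn, X, y, cv=5).mean())
print("kNN after PCA        :", cross_val_score(pca_knn, X, y, cv=5).mean())
```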

The thesis examines the research question of how to learn effectively with the nearest neighbor method in high dimensions. This question was broken down into three smaller questions, which were addressed by developing effective and efficient nearest neighbor algorithms that leverage dimensionality reduction. The algorithm design was based on feature reduction, with classifiers constructed on the reduced features to improve the accuracy of the nearest neighbor algorithm. Finally, the use of dimensionality reduction to form nearest neighbor ensembles was investigated.

A series of empirical studies was conducted to determine which dimensionality reduction techniques can enhance the performance of the nearest neighbor algorithm in high dimensions. Building on the initial results, further empirical studies demonstrated that feature fusion and classifier fusion can improve accuracy even further. Two feature and classifier fusion techniques were proposed, and the circumstances in which these techniques should be applied were examined. Furthermore, the choice of dimensionality reduction method for feature and classifier fusion was investigated; the results indicate that feature fusion is sensitive to this choice. Finally, the use of dimensionality reduction in nearest neighbor ensembles was investigated. The results demonstrate that data complexity measures, such as the attribute-to-instance ratio and Fisher's discriminant ratio, can be used to select the nearest neighbor ensemble depending on the type of data.
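As an aside, the two data complexity measures mentioned above can be computed directly from a dataset. The sketch below is my own (not from the thesis) and uses one common two-class formulation of Fisher's discriminant ratio, taking the maximum over attributes:

```python
# Minimal sketch (not from the thesis) of the two data complexity measures
# named above, in a common two-class formulation.
import numpy as np

def attribute_to_instance_ratio(X):
    # Number of attributes divided by number of instances; large values
    # indicate the sparse, high-dimensional regime discussed above.
    n_instances, n_attributes = X.shape
    return n_attributes / n_instances

def fisher_discriminant_ratio(X, y):
    # Per attribute: (mu1 - mu2)^2 / (var1 + var2); the dataset-level
    # measure is the maximum over attributes (assumes two classes).
    c1, c2 = np.unique(y)
    X1, X2 = X[y == c1], X[y == c2]
    per_attribute = ((X1.mean(axis=0) - X2.mean(axis=0)) ** 2
                     / (X1.var(axis=0) + X2.var(axis=0) + 1e-12))
    return per_attribute.max()

X = np.random.rand(50, 500)             # 50 instances, 500 attributes
y = np.array([0, 1] * 25)
print(attribute_to_instance_ratio(X))   # -> 10.0
print(fisher_discriminant_ratio(X, y))
```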

Place, publisher, year, edition, pages
Stockholm: Department of Computer and Systems Sciences, Stockholm University, 2024, p. 62
Series
Report Series / Department of Computer & Systems Sciences, ISSN 1101-8526 ; 24-003
Keywords [en]
Nearest Neighbor, High-Dimensional Data, Curse of Dimensionality, Dimensionality Reduction
National Category
Computer Sciences
Research subject
Computer and Systems Sciences
Identifiers
URN: urn:nbn:se:su:diva-225627
ISBN: 978-91-8014-645-6 (print)
ISBN: 978-91-8014-646-3 (electronic)
OAI: oai:DiVA.org:su-225627
DiVA, id: diva2:1829796
Public defence
2024-03-05, lilla hörsalen, NOD-huset, Borgarfjordsgatan 12, Kista, 13:00 (English)
Available from: 2024-02-09 Created: 2024-01-19 Last updated: 2024-02-02 Bibliographically approved
List of papers
1. Reducing High-Dimensional Data by Principal Component Analysis vs. Random Projection for Nearest Neighbor Classification
2006 (English) In: Proceedings of the Fifth International Conference on Machine Learning and Applications, 2006. Conference paper, Published paper (Refereed)
Identifiers
urn:nbn:se:su:diva-37977 (URN)
Available from: 2011-01-18 Created: 2010-03-24 Last updated: 2024-01-19 Bibliographically approved
2. Classification of Microarrays with kNN: Comparison of Dimensionality Reduction Methods
2007 (English) In: Intelligent Data Engineering and Automated Learning - IDEAL 2007 / [ed] Hujun Yin, Peter Tino, Emilio Corchado, Will Byrne, Xin Yao, Berlin, Heidelberg: Springer Verlag, 2007, p. 800-809. Conference paper, Published paper (Refereed)
Abstract [en]

Dimensionality reduction can often improve the performance of the k-nearest neighbor classifier (kNN) for high-dimensional data sets, such as microarrays. The effect of the choice of dimensionality reduction method on the predictive performance of kNN for classifying microarray data is an open issue, and four common dimensionality reduction methods, Principal Component Analysis (PCA), Random Projection (RP), Partial Least Squares (PLS) and Information Gain (IG), are compared on eight microarray data sets. It is observed that all four methods result in more accurate classifiers than using the raw attributes. Furthermore, both PCA and PLS reach their best accuracies with fewer components than the other two methods, while RP needs far more components than the others to outperform kNN on the non-reduced data set. No dimensionality reduction method can be concluded to generally outperform the others, although PLS is superior on all four binary classification tasks. The main conclusion of the study is that the choice of dimensionality reduction method can be of major importance when classifying microarrays with kNN.
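A rough, hedged illustration of this kind of comparison (my own sketch using scikit-learn, not the authors' code; mutual information stands in for information gain, and PLS is omitted for brevity):

```python
# Hedged illustration (not the authors' code) of comparing dimensionality
# reduction methods for kNN on a synthetic stand-in for a microarray dataset.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.random_projection import GaussianRandomProjection

X, y = make_classification(n_samples=120, n_features=1000,
                           n_informative=30, random_state=1)

reducers = {
    "PCA": PCA(n_components=30),
    "RP ": GaussianRandomProjection(n_components=30, random_state=1),
    "IG ": SelectKBest(mutual_info_classif, k=30),  # mutual info as IG proxy
}
baseline = KNeighborsClassifier(n_neighbors=3)
print("raw", cross_val_score(baseline, X, y, cv=5).mean())
for name, reducer in reducers.items():
    pipe = make_pipeline(reducer, KNeighborsClassifier(n_neighbors=3))
    print(name, cross_val_score(pipe, X, y, cv=5).mean())
```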

Place, publisher, year, edition, pages
Berlin, Heidelberg: Springer Verlag, 2007
Series
Lecture Notes in Computer Science ; 4881/2007
National Category
Information Systems
Identifiers
urn:nbn:se:su:diva-37828 (URN)
10.1007/978-3-540-77226-2_80 (DOI)
978-3-540-77225-5 (ISBN)
Conference
8th International Conference on Intelligent Data Engineering and Automated Learning, LNCS 4881
Available from: 2010-03-23 Created: 2010-03-23 Last updated: 2024-01-19 Bibliographically approved
3. Fusion of Dimensionality Reduction Methods: A Case Study in Microarray Classification
2009 (English) In: Proceedings of the 12th International Conference on Information Fusion, 2009. Conference paper, Published paper (Refereed)
Identifiers
urn:nbn:se:su:diva-33439 (URN)
Available from: 2009-12-23 Created: 2009-12-23 Last updated: 2024-01-19
4. Improving Fusion of Dimensionality Reduction Methods for Nearest Neighbor Classification
2009 (English) In: Proceedings of the Eighth International Conference on Machine Learning and Applications, IEEE Computer Society, 2009. Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
IEEE Computer Society, 2009
National Category
Information Systems
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:su:diva-135392 (URN)
978-0-7695-3926-3 (ISBN)
Available from: 2016-11-08 Created: 2016-11-08 Last updated: 2024-01-19
5. Choice of Dimensionality Reduction Methods for Feature and Classifier Fusion with Nearest Neighbor Classifiers
2012 (English) In: 15th International Conference on Information Fusion, IEEE Computer Society Digital Library, 2012, p. 875-881. Conference paper, Published paper (Refereed)
Abstract [en]

High-dimensional data often cause problems for current learning algorithms in terms of efficiency and effectiveness. One solution is dimensionality reduction, by which the original feature set is reduced to a small number of features while the accuracy and/or efficiency of the learning algorithm is improved. We have investigated multiple dimensionality reduction methods for nearest neighbor classification in high dimensions. In previous studies, we demonstrated that fusing the outputs of different dimensionality reduction methods, either by combining classifiers built on the reduced features or by combining the reduced features and then applying a single classifier, may yield higher accuracy than individual reduction methods. However, no previous study has investigated which dimensionality reduction methods to choose for fusion when the outputs of several methods are available. We have therefore empirically investigated combinations of the outputs of four dimensionality reduction methods on 18 medicinal chemistry datasets. The investigation demonstrates that fusing nearest neighbor classifiers obtained from multiple reduction methods outperforms the use of individual dimensionality reduction methods in all cases, while the fusion of different feature subsets is quite sensitive to the choice of dimensionality reduction methods.
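A minimal sketch of the two fusion schemes described above, assuming scikit-learn (illustrative only; PCA and random projection stand in for the four reduction methods used in the paper):

```python
# Sketch of the two fusion schemes (illustrative, not the authors' code).
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline, make_union
from sklearn.random_projection import GaussianRandomProjection

X, y = make_classification(n_samples=150, n_features=800,
                           n_informative=25, random_state=2)

reducers = [PCA(n_components=20),
            GaussianRandomProjection(n_components=20, random_state=2)]

# Feature fusion: concatenate the reduced feature sets, then train one kNN.
feature_fusion = make_pipeline(make_union(*reducers),
                               KNeighborsClassifier(n_neighbors=3))

# Classifier fusion: one kNN per reduced feature set, combined by voting.
classifier_fusion = VotingClassifier([
    ("knn%d" % i, make_pipeline(r, KNeighborsClassifier(n_neighbors=3)))
    for i, r in enumerate(reducers)])

print("feature fusion   :", cross_val_score(feature_fusion, X, y, cv=5).mean())
print("classifier fusion:", cross_val_score(classifier_fusion, X, y, cv=5).mean())
```

The difference is where the combination happens: feature fusion merges representations before a single classifier is trained, whereas classifier fusion trains one classifier per representation and merges their decisions.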

Place, publisher, year, edition, pages
IEEE Computer Society Digital Library, 2012
Keywords
machine learning, nearest neighbor classifier, dimensionality reduction, feature fusion, classifier fusion
National Category
Information Systems
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:su:diva-82219 (URN)
978-1-4673-0417-7 (ISBN)
978-0-9824438-4-2 (ISBN)
Conference
15th International Conference on Information Fusion, 9-12 July 2012, Singapore
Available from: 2012-11-12 Created: 2012-11-12 Last updated: 2024-01-19 Bibliographically approved
6. Random subspace and random projection nearest neighbor ensembles for high dimensional data
2022 (English) In: Expert Systems with Applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 191, article id 116078. Article in journal (Refereed) Published
Abstract [en]

The random subspace and random projection methods are investigated and compared as techniques for forming ensembles of nearest neighbor classifiers in high-dimensional feature spaces. The two methods have been empirically evaluated on three types of high-dimensional datasets: microarrays, chemoinformatics, and images. Experimental results on 34 datasets show that both methods lead to improvements in predictive performance compared to the standard nearest neighbor classifier, while the best method to use depends on the type of data: for the microarray and chemoinformatics datasets, random projection outperforms the random subspace method, while the opposite holds for the image datasets. An analysis using data complexity measures, such as the attribute-to-instance ratio and Fisher's discriminant ratio, provides more detailed indications of what relative performance can be expected for specific datasets. The results also indicate that the resulting ensembles may be competitive with state-of-the-art ensemble classifiers; the nearest neighbor ensembles using random projection perform on par with random forests for the microarray and chemoinformatics datasets.
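The two ensemble constructions can be sketched with standard scikit-learn building blocks (my own illustrative reading, not the authors' implementation):

```python
# Rough sketch of the two ensemble constructions compared in the article
# (my own reading, using scikit-learn building blocks, not the authors' code).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.random_projection import GaussianRandomProjection

X, y = make_classification(n_samples=150, n_features=1000,
                           n_informative=30, random_state=3)

# Random subspace: each kNN sees a random subset of the original attributes;
# bagging over features (with bootstrap=False) implements this.
subspace = BaggingClassifier(KNeighborsClassifier(n_neighbors=3),
                             n_estimators=25, max_features=0.1,
                             bootstrap=False, random_state=3)

# Random projection: each kNN sees a different random linear projection
# of all the attributes, and the members vote.
projection = VotingClassifier([
    ("rp%d" % i, make_pipeline(
        GaussianRandomProjection(n_components=30, random_state=i),
        KNeighborsClassifier(n_neighbors=3)))
    for i in range(25)])

print("random subspace  :", cross_val_score(subspace, X, y, cv=5).mean())
print("random projection:", cross_val_score(projection, X, y, cv=5).mean())
```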

Keywords
Nearest neighbor ensemble, High dimensional data, Random subspace method, Random projection method
National Category
Information Systems
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:su:diva-200514 (URN)
10.1016/j.eswa.2021.116078 (DOI)
000736167200011 ()
Available from: 2022-01-06 Created: 2022-01-06 Last updated: 2024-01-19 Bibliographically approved

Open Access in DiVA

fulltext (851 kB), 73 downloads
File name: FULLTEXT01.pdf
File size: 851 kB
Checksum (SHA-512): 75c9747c3b6bef09af1638f8d5d850c297b5f553866e4d835100528cef3c2aa6a66c3cf889f17478a441feeac0f41af5084af8c4ed3d70d3f16227d412fe8bcc
Type: fulltext
Mimetype: application/pdf

By author/editor
Deegalla, Sampath
By organisation
Department of Computer and Systems Sciences
