Can frequent itemset mining be efficiently and effectively used for learning from graph data?
Karunaratne, Thashmee; Boström, Henrik (Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences)
2012 (English). In: 11th International Conference on Machine Learning and Applications (ICMLA) / [ed] M. Arif Wani, Taghi Khoshgoftaar, Xingquan (Hill) Zhu, Naeem Seliya, IEEE, 2012, Vol. 1, p. 409-414. Conference paper, published paper (refereed).
Abstract [en]

Standard graph learning approaches are often challenged by the computational cost involved when learning from very large sets of graph data. One approach to overcome this problem is to transform the graphs into less complex structures that can be more efficiently handled. One obvious potential drawback of this approach is that it may degrade predictive performance due to loss of information caused by the transformations. An investigation of the tradeoff between efficiency and effectiveness of graph learning methods is presented, in which state-of-the-art graph mining approaches are compared to representing graphs by itemsets, using frequent itemset mining to discover features to use in prediction models. An empirical evaluation on 18 medicinal chemistry datasets is presented, showing that employing frequent itemset mining results in significant speedups, without sacrificing predictive performance for both classification and regression.
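To make the approach concrete, the following is a minimal sketch of the general idea only, not the authors' implementation: each graph is reduced to an itemset, frequent itemsets are mined with a standard Apriori implementation, and their presence or absence is used as binary features for an off-the-shelf classifier. The mlxtend/scikit-learn calls and the toy item labels are assumptions made for illustration, not taken from the paper.

```python
# Hypothetical sketch of the general idea (not the authors' code): represent
# graphs as itemsets, mine frequent itemsets, use them as binary features.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori
from sklearn.ensemble import RandomForestClassifier

# Each "transaction" stands in for one graph reduced to a set of items
# (e.g., labelled vertices and edges of a molecule); labels are made up.
graphs_as_itemsets = [
    {"C", "N", "C-C", "C-N"},
    {"C", "O", "C-C", "C-O"},
    {"C", "N", "O", "C-N", "C-O"},
    {"C", "C-C"},
]
y = [1, 0, 1, 0]  # toy class labels

# One-hot encode the transactions and mine itemsets above a support threshold.
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit_transform(graphs_as_itemsets), columns=te.columns_)
frequent = apriori(onehot, min_support=0.5, use_colnames=True)

# Each frequent itemset becomes one binary feature: 1 if the graph contains it.
X = [[int(itemset <= g) for itemset in frequent["itemsets"]]
     for g in graphs_as_itemsets]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.predict(X))
```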

Place, publisher, year, edition, pages
IEEE, 2012. Vol. 1, p. 409-414
Keywords [en]
Graph learning, frequent itemset mining, classification, regression
National Category
Information Systems
Research subject
Computer and Systems Sciences
Identifiers
URN: urn:nbn:se:su:diva-86335
DOI: 10.1109/ICMLA.2012.74
ISI: 000427260500068
ISBN: 978-1-4673-4651-1 (print)
OAI: oai:DiVA.org:su-86335
DiVA, id: diva2:586639
Conference
11th IEEE International Conference on Machine Learning and Applications (ICMLA), Boca Raton, Florida, USA, December 12-15, 2012
Available from: 2013-01-12. Created: 2013-01-12. Last updated: 2022-02-24. Bibliographically approved.
In thesis
1. Learning predictive models from graph data using pattern mining
2014 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Learning from graphs has become a popular research area due to the ubiquity of graph data representing web pages, molecules, social networks, protein interaction networks, etc. However, standard graph learning approaches are often challenged by the computational cost involved in the learning process, due to the richness of the representation. Attempts to improve their efficiency are often associated with the risk of degrading the performance of the predictive models, creating tradeoffs between the efficiency and effectiveness of the learning. Such a situation is analogous to an optimization problem with two objectives, efficiency and effectiveness, where improving one objective without making the other worse off is a better solution, known as a Pareto improvement. This thesis investigates how to improve the efficiency and effectiveness of learning from graph data using pattern mining methods. Two objectives are set: one concerns how to improve the efficiency of pattern mining without reducing the predictive performance of the learned models, and the other concerns how to improve predictive performance without increasing the complexity of pattern mining. The employed research method mainly follows a design science approach, including the development and evaluation of artifacts. The contributions of this thesis include a data representation language that can be characterized as a form in between sequences and itemsets, where the graph information is embedded within items. Several studies, each of which looks for Pareto improvements in efficiency and effectiveness, are conducted using sets of small graphs. Summarizing the findings, some of the proposed methods, namely maximal frequent itemset mining and constraint-based itemset mining, result in dramatically increased efficiency of learning without decreasing the predictive performance of the resulting models. It is also shown that additional background knowledge can be used to enhance the performance of the predictive models without increasing the complexity of the graphs.
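As a rough illustration of a representation "in between sequences and itemsets", where graph information is embedded within the items themselves (the encoding actually used in the thesis may differ), the hypothetical sketch below collapses a small labelled graph into an itemset whose items carry vertex and edge label information; the use of networkx and the labelling scheme are assumptions made for this example.

```python
# Hypothetical sketch (the thesis' encoding may differ): embed graph
# information within items so that a labelled graph collapses into an
# itemset that ordinary itemset miners can process.
import networkx as nx

def graph_to_itemset(graph: nx.Graph) -> frozenset:
    """Turn a labelled graph into a set of string items."""
    # One item per vertex label.
    vertex_items = {data["label"] for _, data in graph.nodes(data=True)}
    # One item per edge, combining both endpoint labels and the edge label.
    edge_items = {
        "-".join(sorted((graph.nodes[u]["label"], graph.nodes[v]["label"])))
        + ":" + data["label"]
        for u, v, data in graph.edges(data=True)
    }
    return frozenset(vertex_items | edge_items)

# A tiny made-up molecular fragment: C-N-C with single bonds.
g = nx.Graph()
g.add_node(0, label="C")
g.add_node(1, label="N")
g.add_node(2, label="C")
g.add_edge(0, 1, label="single")
g.add_edge(1, 2, label="single")

print(graph_to_itemset(g))  # e.g. frozenset({'C', 'N', 'C-N:single'})
```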

Place, publisher, year, edition, pages
Stockholm: Department of Computer and Systems Sciences, Stockholm University, 2014. p. 118
Series
Report Series / Department of Computer & Systems Sciences, ISSN 1101-8526 ; 14-003
Keywords
Machine Learning, Graph Data, Pattern Mining, Classification, Regression, Predictive Models
National Category
Computer Sciences
Research subject
Computer and Systems Sciences
Identifiers
URN: urn:nbn:se:su:diva-100713
ISBN: 978-91-7447-837-2
Public defence
2014-03-25, room B, Forum, Isafjordsgatan 39, Kista, 13:00 (English)
Available from: 2014-03-03. Created: 2014-02-11. Last updated: 2022-02-24. Bibliographically approved.

Open Access in DiVA

No full text in DiVA
