Glacier: guided locally constrained counterfactual explanations for time series classification
Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences. ORCID iD: 0000-0002-8575-421X
Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences. ORCID iD: 0000-0002-3056-6801
Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences. ORCID iD: 0000-0002-1357-1967
Number of authors: 5
2024 (English). In: Machine Learning, ISSN 0885-6125, E-ISSN 1573-0565, Vol. 113, pp. 4639-4669. Article in journal (Refereed). Published.
Abstract [en]

In machine learning applications, there is a need to obtain predictive models of high performance and, most importantly, to allow end-users and practitioners to understand and act on their predictions. One way to obtain such understanding is via counterfactuals, which provide sample-based explanations in the form of recommendations on which features need to be modified from a test example so that the classification outcome of a given classifier changes from an undesired outcome to a desired one. This paper focuses on the domain of time series classification, more specifically, on defining counterfactual explanations for univariate time series. We propose Glacier, a model-agnostic method for generating locally-constrained counterfactual explanations for time series classification using gradient search either on the original space or on a latent space that is learned through an auto-encoder. An additional flexibility of our method is the inclusion of constraints on the counterfactual generation process that favour applying changes to particular time series points or segments while discouraging changing others. The main purpose of these constraints is to ensure more reliable counterfactuals, while increasing the efficiency of the counterfactual generation process. Two particular types of constraints are considered, i.e., example-specific constraints and global constraints. We conduct extensive experiments on 40 datasets from the UCR archive, comparing different instantiations of Glacier against three competitors. Our findings suggest that Glacier outperforms the three competitors in terms of two common metrics for counterfactuals, i.e., proximity and compactness. Moreover, Glacier obtains comparable counterfactual validity compared to the best of the three competitors.
Finally, when comparing the unconstrained variant of Glacier to the constraint-based variants, we conclude that the inclusion of example-specific and global constraints yields a good performance while demonstrating the trade-off between the different metrics.
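The constrained gradient search described in the abstract can be illustrated with a minimal sketch. The snippet below is not the Glacier implementation; it is a simplified stand-in that assumes a differentiable linear classifier (sigmoid over a dot product) and uses a binary mask over time points to mimic the locally constrained search, where only selected points of the series may change. All names and parameter values are illustrative:

```python
import numpy as np

def counterfactual_search(x, w, b, mask, target=1.0,
                          lr=0.5, steps=200, lam=0.1):
    """Gradient search for a counterfactual of a linear classifier
    p = sigmoid(w . x + b).  `mask` is a 0/1 vector marking the time
    points allowed to change (a stand-in for example-specific
    constraints); `lam` weighs proximity to the original series x."""
    x_cf = x.astype(float).copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w @ x_cf + b)))
        # gradient of cross-entropy toward `target`, plus L2 proximity
        grad = (p - target) * w + lam * (x_cf - x)
        x_cf -= lr * grad * mask  # constrained update: masked points only
    return x_cf

# Toy example: a length-10 "series" classified as negative; only the
# first five time points are allowed to be modified.
x = -np.ones(10)
w = np.ones(10)
mask = np.r_[np.ones(5), np.zeros(5)]
x_cf = counterfactual_search(x, w, 0.0, mask)
```

With these toy inputs the search pushes the prediction across the 0.5 decision boundary while leaving the constrained-out second half of the series untouched, illustrating the trade-off between validity (flipping the label) and proximity/compactness (staying close to the original and changing few points).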

Place, publisher, year, edition, pages
2024. Vol. 113, pp. 4639-4669
Keywords [en]
Time series classification, Interpretability, Counterfactual explanations, Deep learning
National subject category
Other Computer and Information Science
Research subject
Computer and Systems Sciences
Identifiers
URN: urn:nbn:se:su:diva-227717
DOI: 10.1007/s10994-023-06502-x
ISI: 001181943800001
Scopus ID: 2-s2.0-85187677577
OAI: oai:DiVA.org:su-227717
DiVA id: diva2:1847019
Available from: 2024-03-26. Created: 2024-03-26. Last updated: 2024-10-16. Bibliographically approved.
Part of thesis
1. Constrained Counterfactual Explanations for Temporal Data
2024 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Recent advancements in machine learning models for temporal data have demonstrated high performance in predictive tasks like time series prediction and event sequence classification, yet these models often remain opaque. Counterfactual explanations offer actionable insights into these opaque models by suggesting input modifications to achieve desired predictive outcomes. Applying counterfactual explanation techniques to temporal data remains challenging, however, as most previous research has focused on image or tabular data classification. Moreover, there is a growing need to extend counterfactual constraints to critical domains like healthcare, where it is crucial to incorporate clinical considerations.

To address these challenges, this thesis proposes novel machine learning models to generate counterfactual explanations for temporal data prediction, along with incorporating additional counterfactual constraints. In particular, this thesis focuses on three types of predictive models: (1) event sequence classification, (2) time series classification, and (3) time series forecasting. Furthermore, the integration of local temporal constraints and domain-specific constraints is proposed to emphasize the importance of temporal features and the relevance of application domains through extensive experimentation. 

This thesis is organized into three parts. The first part presents a counterfactual explanation method for medical event sequences, using style-transfer techniques and incorporating additional medical knowledge in modelling. The second part of the thesis focuses on univariate time series classification, proposing a novel solution that utilizes either latent representation or feature space perturbations, additionally incorporating temporal constraints to guide the counterfactual generation. The third part introduces the problem of counterfactual explanations for time series forecasting, proposes a gradient-based method, and extends it by integrating domain-specific constraints for diabetes patients. The conclusion of this thesis summarizes the empirical findings and discusses future directions for applying counterfactual methods in real-world scenarios.

Place, publisher, year, edition, pages
Stockholm: Department of Computer and Systems Sciences, Stockholm University, 2024. p. 84
Series
Report Series / Department of Computer & Systems Sciences, ISSN 1101-8526 ; 24-015
Keywords
Counterfactual explanations; Deep learning; Explainable machine learning; Healthcare
National subject category
Computer Sciences
Research subject
Computer and Systems Sciences
Identifiers
URN: urn:nbn:se:su:diva-234540
ISBN: 978-91-8014-979-2
ISBN: 978-91-8014-980-8
Public defence
2024-12-04, 09:00, L50, NOD-huset, Borgarfjordsgatan 12, Kista, Stockholm (English)
Available from: 2024-11-11. Created: 2024-10-16. Last updated: 2024-10-29. Bibliographically approved.

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text · Scopus

Authors

Wang, Zhendong; Samsten, Isak; Miliou, Ioanna; Papapetrou, Panagiotis
