Publications (10 of 19)
Mondrejevski, L., Azzopardi, D. & Miliou, I. (2025). Predicting Sepsis Onset with Deep Federated Learning. In: Rosa Meo; Fabrizio Silvestri (Ed.), Machine Learning and Principles and Practice of Knowledge Discovery in Databases: International Workshops of ECML PKDD 2023, Turin, Italy, September 18–22, 2023, Revised Selected Papers, Part IV. Paper presented at International Workshops of ECML PKDD 2023, Turin, Italy, September 18–22, 2023 (pp. 73-86). Springer
Predicting Sepsis Onset with Deep Federated Learning
2025 (English)In: Machine Learning and Principles and Practice of Knowledge Discovery in Databases: International Workshops of ECML PKDD 2023, Turin, Italy, September 18–22, 2023, Revised Selected Papers, Part IV / [ed] Rosa Meo; Fabrizio Silvestri, Springer, 2025, p. 73-86Conference paper, Published paper (Refereed)
Abstract [en]

Life-threatening conditions like sepsis are a leading cause of hospital mortality. The early identification of sepsis onset allows for timely intervention aiming to save patient lives. Although showing great promise for early sepsis onset prediction, Centralized Machine Learning applications are hindered by privacy concerns. Federated Learning can mitigate this limitation by training a global model on data distributed across several hospitals without sharing that data. This research explores the potential of Federated Learning to provide a more privacy-preserving and generalizable solution for predicting sepsis onset using a Deep Federated Learning setup. Patients from the MIMIC-III dataset are classified as either septic or non-septic using relevant patient features, and sepsis onset is identified at the first hour of a detected 5-hour SIRS interval for patients diagnosed with sepsis. We compare the predictive performance of different combinations of classifiers (LSTM and GRU), patient history window lengths, prediction window lengths, and Federated Learning clients, using the metrics AUROC, AUPRC, and F1-Score. Our results show that the Centralized Machine Learning and Federated Learning setups are on par in terms of predictive performance. In addition, on average, the best-performing Federated Learning model is GRU, with a five-hour patient history window and a three-hour prediction window. Overall, the study demonstrates that the proposed Federated Learning setup can predict sepsis onset comparably to state-of-the-art centralized deep learning algorithms for varying numbers of clients, enabling healthcare institutions to collaborate on mutually beneficial tasks without sharing sensitive patient information.
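The core of the setup described in this abstract is Federated Averaging: each client trains locally on its own private data and a server averages the resulting weights. A minimal sketch, using a toy linear model in place of the paper's LSTM/GRU networks (all names, data, and hyperparameters here are illustrative, not taken from the paper):

```python
def local_step(w, data, lr=0.1):
    """One gradient step of least-squares on a client's private data."""
    g = 0.0
    for x, y in data:
        g += 2.0 * (w * x - y) * x
    g /= len(data)
    return w - lr * g

def fed_avg(w, clients, rounds=50):
    """Each round: clients train locally; server averages by client size."""
    for _ in range(rounds):
        updates = [(local_step(w, d), len(d)) for d in clients]
        total = sum(n for _, n in updates)
        w = sum(wi * n for wi, n in updates) / total  # weighted average
    return w

# two hospitals' data (y ≈ 2x) stays local and is never pooled
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.1)]]
w = fed_avg(0.0, clients)
```

Only model weights cross the client-server boundary, which is what makes the scheme attractive for sensitive ICU data.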

Place, publisher, year, edition, pages
Springer, 2025
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 2136 CCIS
Keywords
Classification, Federated Learning, MIMIC-III, Recurrent Neural Network, Sepsis Onset Prediction, Supervised Learning
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:su:diva-240217 (URN)10.1007/978-3-031-74640-6_6 (DOI)2-s2.0-85215978635 (Scopus ID)9783031746390 (ISBN)
Conference
International Workshops of ECML PKDD 2023, Turin, Italy, September 18–22, 2023
Available from: 2025-03-06 Created: 2025-03-06 Last updated: 2025-03-06Bibliographically approved
Krempl, G., Puolamäki, K. & Miliou, I. (2025). Preface. Lecture Notes in Computer Science, 15669 LNCS, v-vi
Preface
2025 (English)In: Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349, Vol. 15669 LNCS, p. v-viArticle in journal, Editorial material (Refereed) Published
National Category
Computer Sciences
Identifiers
urn:nbn:se:su:diva-244101 (URN)2-s2.0-105005267977 (Scopus ID)
Available from: 2025-06-12 Created: 2025-06-12 Last updated: 2025-06-12Bibliographically approved
Miliou, I., Piatkowski, N. & Papapetrou, P. (Eds.). (2024). Advances in Intelligent Data Analysis XXII: 22nd International Symposium on Intelligent Data Analysis, IDA 2024, Stockholm, Sweden, April 24–26, 2024, Proceedings, Part I (1ed.). Springer Nature
Advances in Intelligent Data Analysis XXII: 22nd International Symposium on Intelligent Data Analysis, IDA 2024, Stockholm, Sweden, April 24–26, 2024, Proceedings, Part I
2024 (English)Conference proceedings (editor) (Other academic)
Place, publisher, year, edition, pages
Springer Nature, 2024. p. 268 Edition: 1
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349
National Category
Computer Sciences
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:su:diva-228399 (URN)10.1007/978-3-031-58547-0 (DOI)978-3-031-58546-3 (ISBN)978-3-031-58547-0 (ISBN)
Available from: 2024-04-16 Created: 2024-04-16 Last updated: 2024-04-17Bibliographically approved
Wang, Z., Samsten, I., Miliou, I. & Papapetrou, P. (2024). COMET: Constrained Counterfactual Explanations for Patient Glucose Multivariate Forecasting. In: Annual IEEE Symposium on Computer-Based Medical Systems: 2024 IEEE 37th International Symposium on Computer-Based Medical Systems (CBMS), 26-28 June 2024. Paper presented at 2024 IEEE 37th International Symposium on Computer-Based Medical Systems (CBMS), 26-28 June 2024, Guadalajara, Mexico. (pp. 502-507). IEEE (Institute of Electrical and Electronics Engineers)
COMET: Constrained Counterfactual Explanations for Patient Glucose Multivariate Forecasting
2024 (English)In: Annual IEEE Symposium on Computer-Based Medical Systems: 2024 IEEE 37th International Symposium on Computer-Based Medical Systems (CBMS), 26-28 June 2024, IEEE (Institute of Electrical and Electronics Engineers) , 2024, p. 502-507Conference paper, Published paper (Refereed)
Abstract [en]

Deep learning models have been widely adopted for healthcare-related forecasting applications, such as leveraging glucose monitoring data of diabetes patients to predict hyperglycaemic or hypoglycaemic events. However, most deep learning models are considered black boxes; hence, their predictions are not interpretable and may not offer actionable insights for medical practitioners' decisions. Previous work has shown that counterfactual explanations can be applied in forecasting tasks by suggesting counterfactual changes in time series inputs to achieve the desired forecasting outcome. This study proposes a generalized multivariate forecasting setup of counterfactual generation by introducing a novel approach, COMET, which imposes three domain-specific constraint mechanisms to provide counterfactual explanations for glucose forecasting. Moreover, we conduct the experimental evaluation using two diabetes patient datasets to demonstrate the effectiveness of our proposed approach in generating realistic counterfactual changes in comparison with a baseline approach. Our qualitative analysis evaluates examples to validate that the counterfactual samples are clinically relevant and can effectively lead the patients to achieve a normal range of predicted glucose levels by suggesting changes to the treatment variables.

Place, publisher, year, edition, pages
IEEE (Institute of Electrical and Electronics Engineers), 2024
Series
IEEE International Symposium on Computer-Based Medical Systems, ISSN 2372-918X, E-ISSN 2372-9198
Keywords
Comet, Deep learning, Patients, Time series analysis, Predictive models, Glucose, Diabetes, time series forecasting, blood glucose prediction, counterfactual explanations, deep learning
National Category
Computer Sciences
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:su:diva-233744 (URN)10.1109/CBMS61543.2024.00089 (DOI)001284700700038 ()2-s2.0-85200437241 (Scopus ID)
Conference
2024 IEEE 37th International Symposium on Computer-Based Medical Systems (CBMS), 26-28 June 2024, Guadalajara, Mexico.
Available from: 2024-09-24 Created: 2024-09-24 Last updated: 2024-10-16Bibliographically approved
Wang, Z., Miliou, I., Samsten, I. & Papapetrou, P. (2024). Counterfactual Explanations for Time Series Forecasting. In: 2023 IEEE International Conference on Data Mining (ICDM): . Paper presented at IEEE International Conference on Data Mining (ICDM), 1-4 December 2023, Shanghai, China. (pp. 1391-1396). IEEE conference proceedings
Counterfactual Explanations for Time Series Forecasting
2024 (English)In: 2023 IEEE International Conference on Data Mining (ICDM), IEEE conference proceedings , 2024, p. 1391-1396Conference paper, Published paper (Refereed)
Abstract [en]

Among recent developments in time series forecasting methods, deep forecasting models have gained popularity as they can utilize hidden feature patterns in time series to improve forecasting performance. Nevertheless, the majority of current deep forecasting models are opaque, hence making it challenging to interpret the results. While counterfactual explanations have been extensively employed as a post-hoc approach for explaining classification models, their application to forecasting models still remains underexplored. In this paper, we formulate the novel problem of counterfactual generation for time series forecasting, and propose an algorithm, called ForecastCF, that solves the problem by applying gradient-based perturbations to the original time series. The perturbations are further guided by imposing constraints to the forecasted values. We experimentally evaluate ForecastCF using four state-of-the-art deep model architectures and compare to two baselines. ForecastCF outperforms the baselines in terms of counterfactual validity and data manifold closeness, while generating meaningful and relevant counterfactuals for various forecasting tasks.
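The gradient-based perturbation idea in this abstract can be sketched with a toy differentiable forecaster: nudge the input series until the forecast enters a desired band. The mean-forecaster, bounds, and step size below are illustrative assumptions, not ForecastCF's actual model:

```python
def forecast(x):
    """Toy forecaster: predicts the mean of the input window."""
    return sum(x) / len(x)

def forecast_cf(x, lo, hi, lr=0.05, steps=500):
    """Perturb x by gradient descent until forecast(x) lies in [lo, hi]."""
    x = list(x)
    for _ in range(steps):
        y = forecast(x)
        if lo <= y <= hi:            # forecast inside the desired band: done
            break
        # squared distance to the band midpoint; d forecast / d x_i = 1/n
        target = (lo + hi) / 2.0
        g = 2.0 * (y - target) / len(x)
        x = [xi - lr * g for xi in x]
    return x

x_cf = forecast_cf([1.0, 2.0, 3.0], lo=4.0, hi=5.0)
```

The constraint on the forecasted values plays the role that the desired class label plays in classification counterfactuals.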

Place, publisher, year, edition, pages
IEEE conference proceedings, 2024
Series
IEEE International Conference on Data Mining. Proceedings, ISSN 1550-4786, E-ISSN 2374-8486
Keywords
Time series forecasting, Counterfactual explanations, Model interpretability, Deep learning
National Category
Computer Sciences
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:su:diva-226602 (URN)10.1109/ICDM58522.2023.00180 (DOI)001165180100171 ()2-s2.0-85185401353 (Scopus ID)979-8-3503-0788-7 (ISBN)
Conference
IEEE International Conference on Data Mining (ICDM), 1-4 December 2023, Shanghai, China.
Available from: 2024-02-14 Created: 2024-02-14 Last updated: 2024-11-14Bibliographically approved
Wang, Z., Samsten, I., Miliou, I., Mochaourab, R. & Papapetrou, P. (2024). Glacier: guided locally constrained counterfactual explanations for time series classification. Machine Learning, 113, 4639-4669
Glacier: guided locally constrained counterfactual explanations for time series classification
2024 (English)In: Machine Learning, ISSN 0885-6125, E-ISSN 1573-0565, Vol. 113, p. 4639-4669Article in journal (Refereed) Published
Abstract [en]

In machine learning applications, there is a need to obtain predictive models of high performance and, most importantly, to allow end-users and practitioners to understand and act on their predictions. One way to obtain such understanding is via counterfactuals, which provide sample-based explanations in the form of recommendations on which features need to be modified from a test example so that the classification outcome of a given classifier changes from an undesired outcome to a desired one. This paper focuses on the domain of time series classification, more specifically, on defining counterfactual explanations for univariate time series. We propose Glacier, a model-agnostic method for generating locally-constrained counterfactual explanations for time series classification using gradient search either on the original space or on a latent space that is learned through an auto-encoder. An additional flexibility of our method is the inclusion of constraints on the counterfactual generation process that favour applying changes to particular time series points or segments while discouraging changing others. The main purpose of these constraints is to ensure more reliable counterfactuals, while increasing the efficiency of the counterfactual generation process. Two particular types of constraints are considered, i.e., example-specific constraints and global constraints. We conduct extensive experiments on 40 datasets from the UCR archive, comparing different instantiations of Glacier against three competitors. Our findings suggest that Glacier outperforms the three competitors in terms of two common metrics for counterfactuals, i.e., proximity and compactness. Moreover, Glacier achieves counterfactual validity comparable to the best of the three competitors. Finally, when comparing the unconstrained variant of Glacier to the constraint-based variants, we conclude that the inclusion of example-specific and global constraints yields a good performance while demonstrating the trade-off between the different metrics.
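The example-specific constraints described here can be pictured as a mask over the time series that freezes points which must not change during the gradient search. A toy sketch (the threshold "classifier" and the mask below are invented for illustration, not Glacier's actual models):

```python
def score(x):
    """Toy decision score; predicted class is positive when score > 0."""
    return sum(x) / len(x)

def constrained_cf(x, mask, lr=0.1, steps=200):
    """Gradient ascent on the score, but only at mask==1 time points."""
    x = list(x)
    for _ in range(steps):
        if score(x) > 0:            # desired class reached: stop
            break
        g = 1.0 / len(x)            # d score / d x_i for the mean score
        x = [xi + lr * g * m for xi, m in zip(x, mask)]
    return x

# the last time point is frozen (mask 0) and must survive unchanged
x_cf = constrained_cf([-1.0, -1.0, -1.0], mask=[1, 1, 0])
```

Freezing clinically immutable or irrelevant segments is what makes the resulting counterfactuals more plausible to act on.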

Keywords
Time series classification, Interpretability, Counterfactual explanations, Deep learning
National Category
Other Computer and Information Science
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:su:diva-227717 (URN)10.1007/s10994-023-06502-x (DOI)001181943800001 ()2-s2.0-85187677577 (Scopus ID)
Available from: 2024-03-26 Created: 2024-03-26 Last updated: 2024-10-16Bibliographically approved
Kuratomi Hernandez, A., Miliou, I., Lee, Z., Lindgren, T. & Papapetrou, P. (2024). Ijuice: integer JUstIfied counterfactual explanations. Machine Learning, 113, 5731-5771
Ijuice: integer JUstIfied counterfactual explanations
2024 (English)In: Machine Learning, ISSN 0885-6125, E-ISSN 1573-0565, Vol. 113, p. 5731-5771Article in journal (Refereed) Published
Abstract [en]

Counterfactual explanations modify the feature values of an instance in order to alter its prediction from an undesired to a desired label. As such, they are highly useful for providing trustworthy interpretations of decision-making in domains where complex and opaque machine learning algorithms are utilized. To guarantee their quality and promote user trust, they need to satisfy the faithfulness desideratum, when supported by the data distribution. We hereby propose a counterfactual generation algorithm for mixed-feature spaces that prioritizes faithfulness through k-justification, a novel counterfactual property introduced in this paper. The proposed algorithm employs a graph representation of the search space and provides counterfactuals by solving an integer program. In addition, the algorithm is classifier-agnostic and is not dependent on the order in which the feature space is explored. In our empirical evaluation, we demonstrate that it guarantees k-justification while showing comparable performance to state-of-the-art methods in feasibility, sparsity, and proximity.
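One way to picture k-justification (an informal reading of the property, not the paper's formal definition): a candidate counterfactual is supported by the data when a chain of mutually close observed instances with the desired label connects it to at least k of them. A small sketch using an eps-neighborhood graph and BFS, where eps and the points are made up:

```python
from collections import deque

def neighbors(p, points, eps):
    """Observed points within Euclidean distance eps of p."""
    return [q for q in points if q != p and
            sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 <= eps]

def k_justified(cf, observed, k, eps=1.5):
    """BFS from the counterfactual over the eps-neighborhood graph."""
    seen, queue = set(), deque([cf])
    while queue:
        p = queue.popleft()
        for q in neighbors(p, observed, eps):
            if q not in seen:
                seen.add(q)
                queue.append(q)
    return len(seen) >= k            # reached at least k observed instances

obs = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]  # observed desired-label points
```

A counterfactual far from any observed instance fails the check, which is the sense in which the property enforces faithfulness to the data distribution.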

Keywords
Machine Learning, Interpretability, Counterfactuals, Justification, Integer Programming, Graph Network
National Category
Computer Sciences
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:su:diva-227898 (URN)10.1007/s10994-024-06530-1 (DOI)2-s2.0-85188618603 (Scopus ID)
Available from: 2024-04-02 Created: 2024-04-02 Last updated: 2024-09-10Bibliographically approved
Bifet, A., Krilavičius, T., Miliou, I. & Nowaczyk, S. (Eds.). (2024). Machine Learning and Knowledge Discovery in Databases. Applied Data Science Track: European Conference, ECML PKDD 2024, Vilnius, Lithuania, September 9–13, 2024, Proceedings, Part IX. Springer
Machine Learning and Knowledge Discovery in Databases. Applied Data Science Track: European Conference, ECML PKDD 2024, Vilnius, Lithuania, September 9–13, 2024, Proceedings, Part IX
2024 (English)Conference proceedings (editor) (Other academic)
Place, publisher, year, edition, pages
Springer, 2024. p. 202
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349 ; volume 14949
National Category
Computer Sciences
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:su:diva-237879 (URN)10.1007/978-3-031-70378-2 (DOI)978-3-031-70378-2 (ISBN)978-3-031-70377-5 (ISBN)
Available from: 2025-01-14 Created: 2025-01-14 Last updated: 2025-01-15Bibliographically approved
Mondrejevski, L., Rugolon, F., Miliou, I. & Papapetrou, P. (2024). MASICU: A Multimodal Attention-based classifier for Sepsis mortality prediction in the ICU. In: 2024 IEEE 37th International Symposium on Computer-Based Medical Systems (CBMS): . Paper presented at 2024 IEEE 37th International Symposium on Computer-Based Medical Systems (CBMS), 26-28 June 2024, Guadalajara, Mexico. (pp. 326-331). IEEE (Institute of Electrical and Electronics Engineers)
MASICU: A Multimodal Attention-based classifier for Sepsis mortality prediction in the ICU
2024 (English)In: 2024 IEEE 37th International Symposium on Computer-Based Medical Systems (CBMS), IEEE (Institute of Electrical and Electronics Engineers) , 2024, p. 326-331Conference paper, Published paper (Refereed)
Abstract [en]

Sepsis poses a significant threat to public health, causing millions of deaths annually. While treatable with timely intervention, accurately identifying at-risk patients remains challenging due to the condition's complexity. Traditional scoring systems have been utilized, but their effectiveness has waned over time. Recognizing the need for comprehensive assessment, we introduce MASICU, a novel machine learning model architecture tailored for predicting ICU sepsis mortality. MASICU is a multimodal, attention-based classification model that integrates interpretability within an ICU setting. Our model incorporates multiple modalities and multimodal fusion strategies and prioritizes interpretability through different attention mechanisms. By leveraging both static and temporal features, MASICU offers a holistic view of the patient's clinical status, enhancing predictive accuracy while providing clinically relevant insights.
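The multimodal fusion step in this abstract can be illustrated with attention weights, i.e. a softmax over per-modality relevance scores, combining modality summaries into one patient representation. The scores and vectors below are placeholders; MASICU learns both from ICU data:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    e = [math.exp(s - m) for s in scores]
    z = sum(e)
    return [x / z for x in e]

def fuse(modality_vecs, scores):
    """Attention-weighted sum of per-modality feature vectors."""
    w = softmax(scores)
    dim = len(modality_vecs[0])
    return [sum(wi * v[i] for wi, v in zip(w, modality_vecs))
            for i in range(dim)]

# e.g. a static-features summary and a temporal-features summary
fused = fuse([[1.0, 0.0], [0.0, 1.0]], scores=[0.0, 0.0])
```

Because the weights sum to one, inspecting them shows which modality dominated each prediction, which is the interpretability hook the model relies on.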

Place, publisher, year, edition, pages
IEEE (Institute of Electrical and Electronics Engineers), 2024
Series
IEEE International Symposium on Computer-Based Medical Systems, ISSN 2372-918X, E-ISSN 2372-9198
Keywords
Head, Attention mechanisms, Accuracy, Computer architecture, Predictive models, Sepsis, Magnetic heads, Multimodal, Attention, ICU, Mortality Prediction
National Category
Computer Sciences
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:su:diva-233746 (URN)10.1109/CBMS61543.2024.00061 (DOI)001284700700024 ()2-s2.0-85200517080 (Scopus ID)979-8-3503-8472-7 (ISBN)
Conference
2024 IEEE 37th International Symposium on Computer-Based Medical Systems (CBMS), 26-28 June 2024, Guadalajara, Mexico.
Available from: 2024-09-24 Created: 2024-09-24 Last updated: 2024-09-26Bibliographically approved
Kargar-Sharif-Abad, M., Kharazian, Z., Miliou, I. & Lindgren, T. (2024). SHAP-Driven Explainability in Survival Analysis for Predictive Maintenance Applications. In: Sławomir Nowaczyk; Myra Spiliopoulou; Marco Ragni; Olga Fink (Ed.), HAII5.0 2024 Embracing Human-Aware AI in Industry 2024: Proceedings of Workshop on Embracing Human-Aware AI in Industry 5.0 (HAII5.0 2024) co-located with the 27TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE (ECAI 2024),. Paper presented at ECAI: EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, HAII5.0: Embracing Human-Aware AI in Industry 5.0, 19 October 2024, Santiago de Compostela, Spain..
SHAP-Driven Explainability in Survival Analysis for Predictive Maintenance Applications
2024 (English)In: HAII5.0 2024 Embracing Human-Aware AI in Industry 2024: Proceedings of Workshop on Embracing Human-Aware AI in Industry 5.0 (HAII5.0 2024) co-located with the 27TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE (ECAI 2024), / [ed] Sławomir Nowaczyk; Myra Spiliopoulou; Marco Ragni; Olga Fink, 2024Conference paper, Published paper (Refereed)
Abstract [en]

In the dynamic landscape of industrial operations, ensuring machines operate without interruption is crucial for maintaining optimal productivity levels. Estimating the Remaining Useful Life within Predictive Maintenance is vital for minimizing downtime, improving operational efficiency, and preventing unexpected equipment failures. Survival analysis is a beneficial approach in this context due to its ability to handle censored data (here referring to industrial assets that have not experienced a failure during the study period). However, the black-box nature of survival analysis models necessitates the use of explainable AI for greater transparency and interpretability. This study evaluates three Machine Learning-based Survival Analysis models and a traditional Survival Analysis model using real-world data from SCANIA AB, which includes over 90% censored data. Results indicate that Random Survival Forest outperforms the Cox Proportional Hazards model, Gradient Boosting Survival Analysis, and the Survival Support Vector Machine. Additionally, we employ SHAP analysis to provide global and local explanations, highlighting the importance and interaction of features in our best-performing model. To overcome the limitation of applying SHAP to survival output, we utilize a surrogate model. Finally, SHAP identifies specific influential features, shedding light on their effects and interactions. This comprehensive methodology tackles the inherent opacity of machine learning-based survival analysis models, providing valuable insights into their predictive mechanisms. The findings from our SHAP analysis underscore the pivotal role of these identified features and their interactions, thereby enriching our comprehension of the factors influencing Remaining Useful Life predictions.
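The surrogate trick mentioned in this abstract can be sketched as: fit a simpler, explainable model to reproduce the black-box risk scores, then read per-feature contributions off the surrogate. The hand-coded risk rule and linear surrogate below stand in for the paper's Random Survival Forest and gradient-boosted surrogate with SHAP:

```python
def dot(w, x):
    return sum(a * b for a, b in zip(w, x))

def black_box_risk(x):
    """Stand-in for the survival model's risk score (hypothetical rule)."""
    return 3.0 * x[0] + 0.5 * x[1]

def fit_surrogate(X, y, lr=0.01, steps=3000):
    """Least-squares fit of a linear surrogate to the black-box scores."""
    w = [0.0] * len(X[0])
    for _ in range(steps):
        for j in range(len(w)):
            g = sum(2.0 * (dot(w, xi) - yi) * xi[j]
                    for xi, yi in zip(X, y)) / len(X)
            w[j] -= lr * g
    return w

X = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
w = fit_surrogate(X, [black_box_risk(x) for x in X])
# per-feature attribution for an instance x is then w[j] * x[j]
```

In practice one would apply `shap.TreeExplainer` to a tree-based surrogate rather than reading linear coefficients, but the two-stage structure (black box → surrogate → attribution) is the same.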

Series
CEUR Workshop Proceedings, E-ISSN 1613-0073
Keywords
Explainable Artificial Intelligence, Predictive Maintenance, Survival Analysis, XPdM, Censored data
National Category
Computer Sciences
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:su:diva-234098 (URN)
Conference
ECAI: EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, HAII5.0: Embracing Human-Aware AI in Industry 5.0, 19 October 2024, Santiago de Compostela, Spain.
Available from: 2024-10-07 Created: 2024-10-07 Last updated: 2024-10-09Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-1357-1967
