Demonstrator on Counterfactual Explanations for Differentially Private Support Vector Machines
RISE Research Institutes of Sweden, Sweden.
Stockholm University, Faculty of Law, Department of Law. ORCID iD: 0000-0002-0694-768X
Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences. ORCID iD: 0000-0002-4632-4815
2023 (English). In: Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2022, Grenoble, France, September 19–23, 2022, Proceedings, Part VI / [ed] Massih-Reza Amini; Stéphane Canu; Asja Fischer; Tias Guns; Petra Kralj Novak; Grigorios Tsoumakas, Cham: Springer, 2023, p. 662-666. Conference paper, Published paper (Refereed)
Abstract [en]

We demonstrate the construction of robust counterfactual explanations for support vector machines (SVM), where the privacy mechanism that publicly releases the classifier guarantees differential privacy. Privacy preservation is essential when dealing with sensitive data, such as in applications within the health domain. In addition, providing explanations for machine learning predictions is an important requirement within so-called high-risk applications, as referred to in the EU AI Act. Thus, the innovative aspect of this work is the study of the interaction between three desired properties: accuracy, privacy, and explainability. The SVM classification accuracy is affected by the privacy mechanism through the perturbations it introduces in the classifier weights. Consequently, we need to consider a trade-off between accuracy and privacy. In addition, counterfactual explanations, which quantify the smallest changes to selected data instances needed to change their classification, may lose credibility when data privacy guarantees are in place. Hence, robustness is required for counterfactual explanations to ensure that they remain credible. Our demonstrator provides an interactive environment to show the interplay between the considered aspects of accuracy, privacy, and explainability.
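The tension the abstract describes can be illustrated with a toy sketch: for a linear classifier, the closest counterfactual is the projection of an instance onto the decision hyperplane, and releasing a noise-perturbed weight vector can invalidate that counterfactual. The `closest_counterfactual` and `privatize` helpers below are illustrative assumptions for this sketch, not the mechanism used in the paper; in particular, plain Laplace noise here merely stands in for a calibrated differential-privacy output-perturbation mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear SVM: decision value f(x) = w.x + b, predicted class = sign(f(x)).
w = np.array([2.0, -1.0])
b = 0.5

def closest_counterfactual(x, w, b, margin=1e-3):
    """Smallest L2 change to x that flips sign(w.x + b): project x onto
    the decision hyperplane and step slightly past it (illustrative helper)."""
    f = w @ x + b
    step = (f / (w @ w)) * w          # component of x along the normal of the boundary
    return x - step * (1.0 + margin)  # overshoot the boundary by a small margin

def privatize(w, scale=0.1):
    """Release a noisy weight vector; Laplace noise stands in for a
    differential-privacy output-perturbation mechanism (assumed, not the
    paper's construction)."""
    return w + rng.laplace(scale=scale, size=w.shape)

x = np.array([1.0, 1.0])              # f(x) = 2 - 1 + 0.5 = 1.5, classified positive
x_cf = closest_counterfactual(x, w, b)

# The counterfactual flips the label under the true classifier...
assert np.sign(w @ x + b) != np.sign(w @ x_cf + b)

# ...but under a privatized classifier the same counterfactual may no longer
# flip the label, which is why robust counterfactuals are needed.
w_priv = privatize(w)
print("label of x_cf under private weights:", np.sign(w_priv @ x_cf + b))
```

Because the counterfactual sits only marginally past the boundary, even small weight perturbations can move the boundary past it; a robust counterfactual would instead be placed at a distance that accounts for the noise scale.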

Place, publisher, year, edition, pages
Cham: Springer, 2023. p. 662-666
Series
Lecture Notes in Artificial Intelligence, ISSN 0302-9743, E-ISSN 1611-3349 ; 13718
Keywords [en]
Counterfactual explanations, Support vector machines, Differential privacy
National Category
Law and Society; Computer and Information Sciences
Identifiers
URN: urn:nbn:se:su:diva-215639
DOI: 10.1007/978-3-031-26422-1_52
ISI: 000999152800052
Scopus ID: 2-s2.0-85150995194
ISBN: 978-3-031-26421-4 (print)
ISBN: 978-3-031-26422-1 (electronic)
OAI: oai:DiVA.org:su-215639
DiVA, id: diva2:1745208
Conference
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2022), Grenoble, France, 19-23 September, 2022
Available from: 2023-03-22. Created: 2023-03-22. Last updated: 2024-10-15. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text; Scopus

Authority records

Greenstein, Stanley; Papapetrou, Panagiotis

