2024 (English)In: Proceedings 24th IEEE International Conference on Data Mining: ICDM 2024 / [ed] Elena Baralis; Kun Zhang; Ernesto Damiani; Merouane Debbah; Panos Kalnis; Xindong Wu, IEEE, 2024, p. 181-190Conference paper, Published paper (Refereed)
Abstract [en]
Counterfactual explanations can be used as a means to explain a model's decision process and to provide recommendations to users on how to improve their current status. The difficulty of applying these counterfactual recommendations from the users' perspective, also known as burden, may be used to assess the model's algorithmic fairness and to provide fair recommendations among different sensitive feature groups. We propose a novel model-agnostic, mathematical programming-based, group counterfactual algorithm that can: (1) detect biases via group counterfactual burden, (2) produce fair recommendations among sensitive groups, and (3) identify relevant subgroups of instances through shared counterfactuals. We analyze these capabilities from the perspective of recourse fairness, and empirically compare our proposed method with state-of-the-art algorithms for group counterfactual generation in order to assess bias identification as well as group counterfactual effectiveness and burden minimization.
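The notion of burden in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm; it only shows the underlying idea that the average cost of moving a group's instances to a shared counterfactual point can serve as a per-group recourse burden, whose disparity across sensitive groups hints at bias. The L1 cost, the synthetic groups, and the shared counterfactual target are all illustrative assumptions.

```python
import numpy as np

def counterfactual_burden(X, X_cf):
    """Mean per-instance L1 cost of moving each instance to its
    counterfactual; a simple proxy for recourse burden."""
    return float(np.mean(np.abs(X_cf - X).sum(axis=1)))

# Illustrative data: two sensitive groups sharing one counterfactual
# target, but group B starts farther from it in feature space.
rng = np.random.default_rng(0)
X_a = rng.normal(0.0, 0.1, size=(50, 3))   # group A instances
X_b = rng.normal(-0.5, 0.1, size=(50, 3))  # group B instances
cf = np.full(3, 0.5)                       # shared counterfactual point

burden_a = counterfactual_burden(X_a, np.tile(cf, (50, 1)))
burden_b = counterfactual_burden(X_b, np.tile(cf, (50, 1)))
print(burden_a, burden_b)  # group B bears a noticeably higher burden
```

A gap between the two burden values would flag the shared recommendation as unequally costly across groups, which is the kind of disparity the proposed method is designed to detect and minimize.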
Place, publisher, year, edition, pages
IEEE, 2024
Keywords
Counterfactual explanations, Algorithmic Fairness, Group counterfactuals, Local explainability
National Category
Computer Systems
Identifiers
urn:nbn:se:su:diva-233353 (URN)
10.1109/ICDM59182.2024.00025 (DOI)
2-s2.0-86000228096 (Scopus ID)
979-8-3315-0668-1 (ISBN)
979-8-3315-0669-8 (ISBN)
Conference
24th IEEE International Conference on Data Mining (ICDM 2024), Abu Dhabi, United Arab Emirates, 9-12 December, 2024
Available from: 2024-09-09 Created: 2024-09-09 Last updated: 2025-04-28 Bibliographically approved