Mitigating discrimination in clinical machine learning decision support using algorithmic processing techniques
Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
2020 (English). In: Discovery Science: 23rd International Conference, DS 2020, Thessaloniki, Greece, October 19–21, 2020, Proceedings / [ed] Annalisa Appice, Grigorios Tsoumakas, Yannis Manolopoulos, Stan Matwin, Springer, 2020, p. 19-33. Conference paper, Published paper (Refereed)
Abstract [en]

Discrimination on the basis of protected characteristics, such as race or gender, is an insufficiently addressed yet pertinent issue in Machine Learning (ML). This line of investigation is particularly lacking in clinical decision-making, where the consequences can be life-altering. Certain real-world clinical ML decision tools are known to exhibit significant levels of discrimination. There is some indication that fairness can be improved during algorithmic processing, but this has not been widely examined in the clinical setting. This paper therefore explores the extent to which novel algorithmic processing techniques can mitigate discrimination against protected groups in clinical resource-allocation ML decision-support algorithms. Specifically, three state-of-the-art discrimination mitigation techniques, one for each stage of algorithmic processing, are compared when applied to a real-world clinical ML decision algorithm known to discriminate with regard to racial characteristics. The results are promising: such techniques can significantly improve the fairness of clinical resource-allocation ML decision tools, particularly during pre- and post-processing, and discrimination is reduced to arbitrary levels at little to no cost to accuracy. Similar studies are needed to consolidate these results. Future work should also move towards a generalisable framework for ML fairness in healthcare.
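The abstract does not name the three mitigation techniques compared. As an illustration only, the sketch below shows one widely used pre-processing technique of the kind the paper evaluates: reweighing (Kamiran & Calders), which assigns each training instance the weight w = P(group) * P(label) / P(group, label) so that group membership and outcome become statistically independent under the weighted distribution before any model is trained. The function name and toy data are assumptions for this example, not from the paper.

```python
from collections import Counter

def reweigh(groups, labels):
    """Return per-instance weights that make group and label
    statistically independent under the weighted distribution."""
    n = len(labels)
    p_g = Counter(groups)              # counts per protected group
    p_y = Counter(labels)              # counts per outcome label
    p_gy = Counter(zip(groups, labels))  # joint counts
    return [
        (p_g[g] / n) * (p_y[y] / n) / (p_gy[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy data: group "a" receives the positive label more often than "b".
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweigh(groups, labels)
# weights: [0.75, 0.75, 1.5, 1.5, 0.75, 0.75]
```

Under these weights the weighted positive rate is identical for both groups (0.5 each), so a downstream classifier trained with them (e.g. via a `sample_weight` argument) no longer sees the historical imbalance. In-processing and post-processing techniques instead adjust the learning objective or the decision thresholds, respectively.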

Place, publisher, year, edition, pages
Springer, 2020. p. 19-33
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 12323
Keywords [en]
fairness, machine learning, clinical decision support, resource-allocation
National Category
Computer Sciences
Research subject
Computer and Systems Sciences
Identifiers
URN: urn:nbn:se:su:diva-189234
DOI: 10.1007/978-3-030-61527-7_2
ISBN: 978-3-030-61526-0 (print)
ISBN: 978-3-030-61527-7 (electronic)
OAI: oai:DiVA.org:su-189234
DiVA, id: diva2:1519421
Conference
23rd International Conference, DS 2020, Thessaloniki, Greece (online), October 19–21, 2020
Available from: 2021-01-18. Created: 2021-01-18. Last updated: 2022-02-25. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Authority records

Hollmén, Jaakko
