Mitigating discrimination in clinical machine learning decision support using algorithmic processing techniques
Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
2020 (English). In: Discovery Science: 23rd International Conference, DS 2020, Thessaloniki, Greece, October 19–21, 2020, Proceedings / [ed] Annalisa Appice, Grigorios Tsoumakas, Yannis Manolopoulos, Stan Matwin, Springer, 2020, pp. 19-33. Conference paper, Published paper (Refereed)
Abstract [en]

Discrimination on the basis of protected characteristics - such as race or gender - within Machine Learning (ML) is an insufficiently addressed yet pertinent issue. This line of investigation is particularly lacking within clinical decision-making, for which the consequences can be life-altering. Certain real-world clinical ML decision tools are known to demonstrate significant levels of discrimination. There is some indication that fairness can be improved during algorithmic processing, but this has not been widely examined in the clinical setting. This paper therefore explores the extent to which novel algorithmic processing techniques may be able to mitigate discrimination against protected groups in clinical resource-allocation ML decision-support algorithms. Specifically, three state-of-the-art discrimination mitigation techniques are compared, one for each stage of algorithmic processing, when applied to a real-world clinical ML decision algorithm which is known to discriminate with regard to racial characteristics. The results are promising, revealing that such techniques could significantly improve the fairness of clinical resource-allocation ML decision tools, particularly during pre- and post-processing. Discrimination is shown to be reduced to arbitrarily low levels at little to no cost to accuracy. Similar studies are needed to consolidate these results. Other future recommendations include working towards a generalisable framework for ML fairness in healthcare.
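The record contains no code, but as a rough illustration of what post-processing mitigation means in practice, the sketch below picks a group-specific decision threshold so that positive-prediction rates line up across a protected attribute, then compares the demographic parity gap and accuracy before and after. This is a minimal sketch on synthetic, hypothetical data assuming a demographic-parity-style criterion; it is not the state-of-the-art techniques or the real-world clinical algorithm evaluated in the paper.

```python
# Illustrative sketch only -- not the methods studied in the paper.
# Post-processing idea: per-group decision thresholds chosen to reduce
# the demographic parity gap, evaluated on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "clinical" data with a binary protected attribute that leaks
# bias into the outcome labels (hypothetical, for illustration only).
n = 4000
protected = rng.integers(0, 2, size=n)
severity = rng.normal(size=n)
# Biased label: the same severity yields fewer positives for group 1.
label = (severity + 0.8 * (1 - protected) + rng.normal(0, 0.5, n) > 0.5).astype(int)
X = np.column_stack([severity, protected])

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, label, protected, test_size=0.5, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]

def demographic_parity_gap(pred, group):
    """Absolute difference in positive-prediction rates between groups."""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

# Baseline: one shared threshold of 0.5 for everyone.
baseline_pred = (scores >= 0.5).astype(int)
print("baseline gap:", demographic_parity_gap(baseline_pred, g_te))

# Post-processing: choose a threshold for group 1 so its positive rate
# roughly matches group 0's (a crude demographic-parity adjustment).
target_rate = baseline_pred[g_te == 0].mean()
thresh_g1 = np.quantile(scores[g_te == 1], 1 - target_rate)
adjusted_pred = np.where(g_te == 1, scores >= thresh_g1, scores >= 0.5).astype(int)
print("adjusted gap:", demographic_parity_gap(adjusted_pred, g_te))
print("accuracy before/after:",
      (baseline_pred == y_te).mean(), (adjusted_pred == y_te).mean())
```

On data like this, the adjusted thresholds shrink the parity gap with only a small change in accuracy, which mirrors the kind of fairness/accuracy trade-off the abstract describes.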

Place, publisher, year, edition, pages
Springer, 2020. pp. 19-33
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 12323
Keywords [en]
fairness, machine learning, clinical decision support, resource allocation
HSV category
Research subject
Computer and Systems Sciences
Identifiers
URN: urn:nbn:se:su:diva-189234
DOI: 10.1007/978-3-030-61527-7_2
ISBN: 978-3-030-61526-0 (print)
ISBN: 978-3-030-61527-7 (digital)
OAI: oai:DiVA.org:su-189234
DiVA, id: diva2:1519421
Conference
23rd International Conference, DS 2020, Thessaloniki, Greece (online), October 19–21, 2020
Available from: 2021-01-18 Created: 2021-01-18 Last updated: 2022-02-25 Bibliographically approved

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text

Person

Hollmén, Jaakko
