Publications (10 of 20)
Erman, E., Furendal, M. & Möller, N. (2025). Algorithmic Fairness and Feasibility. Philosophy & Technology, 38(1), Article ID 3.
Algorithmic Fairness and Feasibility
2025 (English). In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 38, no 1, article id 3. Article in journal (Refereed). Published
Abstract [en]

The “impossibility results” in algorithmic fairness suggest that a predictive model cannot fully meet two common fairness criteria – sufficiency and separation – except under extraordinary circumstances. These findings have sparked a discussion on fairness in algorithms, prompting debates over whether predictive models can avoid unfair discrimination based on protected attributes, such as ethnicity or gender. As shown by Otto Sahlgren, however, the discussion of the impossibility results would gain from importing some of the tools developed in the philosophical literature on feasibility. Utilizing these tools, Sahlgren sketches a cautiously optimistic view of how algorithmic fairness can be made feasible in restricted local decision-making. While we think it is a welcome move to inject the literature on feasibility into the debate on algorithmic fairness, Sahlgren says very little about the general gains of bringing feasibility considerations into theorizing about algorithmic fairness. How, more precisely, does it help us make assessments about fairness in algorithmic decision-making? This is what is addressed in this Reply. More specifically, our two-fold argument is that feasibility plays an important but limited role in algorithmic fairness. We end by offering a sketch of a framework which may be useful for theorizing feasibility in algorithmic fairness.
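The impossibility result referred to in the abstract can be illustrated with a small numerical sketch (not taken from the paper; the group labels and counts below are hypothetical): when two groups have different base rates and prediction is imperfect, equalizing predictive value across groups forces their error rates apart, so sufficiency and separation cannot both be satisfied.

```python
# Hypothetical toy example illustrating the sufficiency/separation conflict.
# None of these numbers come from the paper; they are chosen only to show the
# arithmetic behind the impossibility result.

def rates(tp, fp, fn, tn):
    """Positive predictive value, true positive rate, false positive rate."""
    ppv = tp / (tp + fp)   # equal PPV across groups: a common proxy for sufficiency
    tpr = tp / (tp + fn)   # separation requires equal TPR ...
    fpr = fp / (fp + tn)   # ... and equal FPR across groups
    return ppv, tpr, fpr

# Group A: base rate 0.5 (50 of 100 individuals are truly positive)
group_a = rates(tp=40, fp=10, fn=10, tn=40)
# Group B: base rate 0.2 (20 of 100 individuals are truly positive)
group_b = rates(tp=16, fp=4, fn=4, tn=76)

print("Group A: PPV=%.2f TPR=%.2f FPR=%.2f" % group_a)  # 0.80, 0.80, 0.20
print("Group B: PPV=%.2f TPR=%.2f FPR=%.2f" % group_b)  # 0.80, 0.80, 0.05

# PPV and TPR are equal across the groups, yet the FPRs differ (0.20 vs 0.05),
# because FPR = (p / (1 - p)) * ((1 - PPV) / PPV) * TPR depends on the base
# rate p. Separation therefore fails unless base rates are equal or the
# predictor is perfect.
```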

Keywords
Algorithmic Fairness, Feasibility, Impossibility, Sahlgren, Base rate Inequality
National Category
Philosophy; Ethics
Identifiers
urn:nbn:se:su:diva-242393 (URN), 10.1007/s13347-024-00835-8 (DOI), 2-s2.0-85214274367 (Scopus ID)
Available from: 2025-04-22 Created: 2025-04-22 Last updated: 2025-09-11. Bibliographically approved
Furendal, M. (2025). Collective Ownership of AI. In: Martin Hähnel; Regina Müller (Ed.), A Companion to Applied Philosophy of AI (pp. 372-386). Hoboken: Wiley-Blackwell
Collective Ownership of AI
2025 (English). In: A Companion to Applied Philosophy of AI / [ed] Martin Hähnel; Regina Müller, Hoboken: Wiley-Blackwell, 2025, p. 372-386. Chapter in book (Refereed)
Abstract [en]

AI technology promises to be both the most socially important and the most profitable technology of a generation. At the same time, the control over – and the profits from – the technology are highly concentrated in a handful of large tech companies. This chapter discusses whether bringing AI technology under collective ownership and control is an attractive way of counteracting this development.

It discusses justice-based rationales for collective ownership, such as the claim that, since the training of AI systems relies on a form of enclosure of the data commons, the value created by those systems should be fairly distributed. It also considers democracy-based rationales, like the suggestion that collective ownership is needed to ensure democratic control over the way this crucial technology is developed and deployed. The chapter also discusses possible forms of collective ownership, like publicly funded advanced AI research and democratically controlled AI companies. It concludes that the case for shifting to a model of collective ownership is most compelling when based on the concern that the private model could come to concentrate economic and political power to such a degree that it threatens institutions designed to promote justice and democracy.

Place, publisher, year, edition, pages
Hoboken: Wiley-Blackwell, 2025
Keywords
AI, AI governance, Collective ownership, Democracy, Distributive justice
National Category
Philosophy; Political Science (Excluding Peace and Conflict Studies)
Identifiers
urn:nbn:se:su:diva-247438 (URN), 10.1002/9781394238651.ch26 (DOI), 2-s2.0-105015789202 (Scopus ID), 9781394238620 (ISBN), 9781394238644 (ISBN), 9781394238637 (ISBN)
Available from: 2025-09-29 Created: 2025-09-29 Last updated: 2025-09-29. Bibliographically approved
Furendal, M. (2025). Reciprocity, Burden-Sharing, and the Individual Duty of Productive Justice. Journal of Business Ethics
Reciprocity, Burden-Sharing, and the Individual Duty of Productive Justice
2025 (English). In: Journal of Business Ethics, ISSN 0167-4544, E-ISSN 1573-0697. Article in journal (Refereed). Epub ahead of print
Abstract [en]

This article develops and defends a novel argument for why individuals have a duty to contribute to the realisation of justice by making a productive contribution. Analysing the shortcomings of attempting to ground such a duty in either reciprocity or the idea of sharing burdens, I suggest that the best alternative is a hybrid account which draws on both. The account says, first, that everyone must make an equally good effort, relative to their ability, to contribute the socially necessary labour needed to realise a just state of affairs in which everyone receives what they are due. Second, if and when this level is reached, everyone has a conditional obligation to benefit others further, if they benefit from the additional work of others. Opting out is thus permissible, since this second part of the duty is only triggered if one wants a share of the social surplus produced by others.

Keywords
Distributive justice, Fair burden-sharing, Productive justice, Reciprocity
National Category
Ethics
Identifiers
urn:nbn:se:su:diva-248391 (URN), 10.1007/s10551-025-06139-x (DOI), 001589954500001 (), 2-s2.0-105018506134 (Scopus ID)
Available from: 2025-10-24 Created: 2025-10-24 Last updated: 2025-10-24
Erman, E. & Furendal, M. (2025). The Democratic Challenges in Global Governance of AI. Current history (1941), 124(858), 3-8
The Democratic Challenges in Global Governance of AI
2025 (English). In: Current history (1941), ISSN 0011-3530, E-ISSN 1944-785X, Vol. 124, no 858, p. 3-8. Article in journal (Refereed). Published
Abstract [en]

Artificial intelligence is bound to have a significant impact on societies and global politics. Given that everyone is affected by the advent of AI technology, its development and deployment should arguably be under democratic control. This essay describes the global AI governance regime complex that has developed in recent years, and discusses what should be democratically governed, who should have a say, and how this should happen. The answers to these questions depend on why we value the democratic ideal and what reasons there are to extend it to the AI domain.

Keywords
artificial intelligence, democratic AI, democratization, global AI governance, technology
National Category
Political Science (Excluding Peace and Conflict Studies)
Identifiers
urn:nbn:se:su:diva-240098 (URN), 10.1525/curh.2025.124.858.3 (DOI), 2-s2.0-85217668554 (Scopus ID)
Available from: 2025-03-06 Created: 2025-03-06 Last updated: 2025-03-06. Bibliographically approved
Erman, E. & Furendal, M. (2025). The Democratic Challenges in the Global Governance of Artificial Intelligence. Current history (1941), 124(858), 3-8
The Democratic Challenges in the Global Governance of Artificial Intelligence
2025 (English). In: Current history (1941), ISSN 0011-3530, E-ISSN 1944-785X, Vol. 124, no 858, p. 3-8. Article in journal (Refereed). Published
Abstract [en]

Artificial intelligence is bound to have a significant impact on societies and global politics. Given that everyone is affected by the advent of AI technology, its development and deployment should arguably be under democratic control. This essay describes the global AI governance regime complex that has developed in recent years, and discusses what should be democratically governed, who should have a say, and how this should happen. The answers to these questions depend on why we value the democratic ideal and what reasons there are to extend it to the AI domain.

Keywords
artificial intelligence, technology, democratic AI, democratization, global AI governance
National Category
Political Science (Excluding Peace and Conflict Studies)
Research subject
Political Science
Identifiers
urn:nbn:se:su:diva-249115 (URN), 10.1525/curh.2025.124.858.3 (DOI)
Available from: 2025-11-05 Created: 2025-11-05 Last updated: 2025-11-05. Bibliographically approved
Erman, E. & Furendal, M. (2025). The democratic role of non-state actors in the global governance of artificial intelligence. Critical Review of International Social and Political Philosophy
The democratic role of non-state actors in the global governance of artificial intelligence
2025 (English). In: Critical Review of International Social and Political Philosophy, ISSN 1369-8230, E-ISSN 1743-8772. Article in journal (Refereed). Epub ahead of print
Abstract [en]

The emerging global governance of artificial intelligence (AI) is shaped by numerous political actors. Inviting non-state actors into such processes is typically assumed to address a perceived democratic deficit, by promoting increased representation, transparency, and openness. In the AI sphere, however, non-state actors include the same multinational companies that develop the technology to be regulated. Surprisingly, the task of normatively theorizing the democratic role of non-state actors in global AI governance has nevertheless been largely ignored. This paper addresses this gap by specifying, first, under what conditions non-state actors contribute to the actual democratization of global AI governance, as ‘democratic agents’, and second, under what conditions they instead contribute to the strengthening of the prerequisites for future democratization, as ‘agents of democracy’. We conclude that, although few non-state actors are authorized to act as ‘democratic agents’, their exercise of ‘moral’, ‘epistemic’, and ‘market authority’ could make them legitimate ‘agents of democracy’.

Keywords
Artificial intelligence, democratic agency, democratic role, epistemic authority, global governance, market authority, moral authority, non-state actors
National Category
Political Science (Excluding Peace and Conflict Studies)
Identifiers
urn:nbn:se:su:diva-246090 (URN), 10.1080/13698230.2025.2520037 (DOI), 001511180100001 (), 2-s2.0-105008295636 (Scopus ID)
Available from: 2025-08-28 Created: 2025-08-28 Last updated: 2025-08-28
Erman, E. & Furendal, M. (2024). Artificial Intelligence and the Political Legitimacy of Global Governance. Political Studies, 72(2), 421-441
Artificial Intelligence and the Political Legitimacy of Global Governance
2024 (English). In: Political Studies, ISSN 0032-3217, E-ISSN 1467-9248, Vol. 72, no 2, p. 421-441. Article in journal (Refereed). Published
Abstract [en]

Although the concept of “AI governance” is frequently used in the debate, it is still rather undertheorized. Often it seems to refer to the mechanisms and structures needed to avoid “bad” outcomes and achieve “good” outcomes with regard to the ethical problems artificial intelligence is thought to actualize. In this article we argue that, although this outcome-focused view captures one important aspect of “good governance,” its emphasis on effects runs the risk of overlooking important procedural aspects of good AI governance. One of the most important properties of good AI governance is political legitimacy. Starting out from the assumptions that AI governance should be seen as global in scope and that political legitimacy requires at least a democratic minimum, this article has a twofold aim: to develop a theoretical framework for theorizing the political legitimacy of global AI governance, and to demonstrate how it can be used as a compass for critically assessing the legitimacy of actual instances of global AI governance. Elaborating on a distinction between “governance by AI” and “governance of AI” in relation to different kinds of authority and different kinds of decision-making leads us to the conclusions that much of the existing global AI governance lacks important properties necessary for political legitimacy, and that political legitimacy would be negatively impacted if we handed over certain forms of decision-making to artificial intelligence systems.

Keywords
artificial intelligence, AI governance, political legitimacy, democracy, global governance
National Category
Political Science (excluding Public Administration Studies and Globalisation Studies)
Identifiers
urn:nbn:se:su:diva-212203 (URN), 10.1177/00323217221126665 (DOI), 000864214800001 (), 2-s2.0-85139203563 (Scopus ID)
Funder
Swedish Research Council, 2018-01549; Marianne and Marcus Wallenberg Foundation, MMW 2020.0044
Available from: 2022-12-03 Created: 2022-12-03 Last updated: 2024-09-17. Bibliographically approved
Erman, E. & Furendal, M. (2024). The democratization of global AI governance and the role of tech companies. Nature Machine Intelligence, 6, 246-248
The democratization of global AI governance and the role of tech companies
2024 (English). In: Nature Machine Intelligence, ISSN 2522-5839, Vol. 6, p. 246-248. Article in journal (Refereed). Published
Abstract [en]

Can non-state multinational tech companies counteract the potential democratic deficit in the emerging global governance of AI? We argue that although they may strengthen core values of democracy such as accountability and transparency, they currently lack the right kind of authority to democratize global AI governance.

National Category
Political Science (excluding Public Administration Studies and Globalisation Studies)
Identifiers
urn:nbn:se:su:diva-228171 (URN), 10.1038/s42256-024-00811-z (DOI), 001181015300001 (), 2-s2.0-85187154266 (Scopus ID)
Available from: 2024-04-15 Created: 2024-04-15 Last updated: 2024-04-29. Bibliographically approved
Furendal, M., Brouwer, H. & van Der Deijl, W. (2024). The Future of the Philosophy of Work. Journal of Applied Philosophy, 41(2), 181-201
The Future of the Philosophy of Work
2024 (English). In: Journal of Applied Philosophy, ISSN 0264-3758, E-ISSN 1468-5930, Vol. 41, no 2, p. 181-201. Article in journal (Refereed). Published
Abstract [en]

Work has always been a significant source of ethical questions, philosophical reflection, and political struggle. Although the future of work in a sense is always at stake, the issue is particularly relevant right now, in light of the advent of advanced AI systems and the collective experience of the COVID-19 pandemic. This has reinvigorated philosophical discussion and interest in the study of the future of work. The purpose of this survey article is to provide an overview of the emerging philosophical field that engages with the future of work, with a special focus on equality and justice, and to outline a research agenda that can help the field to develop further. Section 2 provides some historical context for the current surge in interest in the topic. Then, we discuss what work is and whether there is a philosophy of work (Section 3). The four main sub-debates we then turn to concern the value of work (Section 4), distributive justice and work (Section 5), productive justice (Section 6), and institutional reforms prompted by changes in how work is organized (Section 7). The final section discusses the importance of the distinction between ideal and non-ideal theory in philosophical investigations into the future of work.

National Category
Philosophy
Identifiers
urn:nbn:se:su:diva-228596 (URN), 10.1111/japp.12730 (DOI), 001198857800001 (), 2-s2.0-85190285225 (Scopus ID)
Available from: 2024-04-23 Created: 2024-04-23 Last updated: 2024-09-09. Bibliographically approved
Furendal, M. (2024). Tying Ourselves to the Mast, or Acting for the Sake of Justice? Ethos, Individual Duties, and Social Sanctions. Journal of social philosophy, 55(3), 522-540
Tying Ourselves to the Mast, or Acting for the Sake of Justice? Ethos, Individual Duties, and Social Sanctions
2024 (English). In: Journal of social philosophy, ISSN 0047-2786, E-ISSN 1467-9833, Vol. 55, no 3, p. 522-540. Article in journal (Refereed). Published
Abstract [en]

Many contemporary political movements focus more on changing the values and principles that people act on in their daily lives rather than institutions and legal frameworks. Political-philosophical theories of justice, however, often focus more on the Rawlsian “basic structure” than the “ethos” of a just society, and rarely discuss how individuals may be encouraged to act in accordance with principles of justice. This article attempts to redress this, and draws on moral, social and political philosophy to argue that an ethos exists in a group or society either i) when its members internalize and act from a particular principle, or ii) when its members uphold a decentralized system of informal social sanctions that increases their compliance with a particular principle. 

National Category
Political Science (excluding Public Administration Studies and Globalisation Studies)
Identifiers
urn:nbn:se:su:diva-212204 (URN), 10.1111/josp.12502 (DOI), 000899419200001 (), 2-s2.0-85144110018 (Scopus ID)
Available from: 2022-12-03 Created: 2022-12-03 Last updated: 2025-02-13. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-2378-750x
