Interval temporal logics take time intervals, instead of time points, as their primitive temporal entities. One of the most studied interval temporal logics is Halpern and Shoham’s modal logic of time intervals HS, which associates a modal operator with each binary relation between intervals over a linear order (the so-called Allen’s interval relations). In this paper, we compare and classify the expressiveness of all fragments of HS on the class of all linear orders and on the subclass of all dense linear orders. For each of these classes, we identify a complete set of definabilities between HS modalities, valid in that class, thus obtaining a complete classification of the family of all 4096 fragments of HS with respect to their expressiveness. We show that on the class of all linear orders there are exactly 1347 expressively different fragments of HS, while on the class of dense linear orders there are exactly 966 such expressively different fragments.
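The figure of 4096 follows from simple combinatorics: HS has twelve modal operators, one for each of Allen's relations other than equality (six relations and their six inverses), and a fragment is determined by the subset of modalities it retains, giving 2^12 = 4096 fragments. A minimal sketch (the modality labels below are the standard HS letter names, used here only for illustration):

```python
# One HS modality per Allen relation (excluding equality):
# six "forward" relations and their six inverses.
modalities = ["A", "L", "B", "E", "D", "O",                      # after, later, begins, ends, during, overlaps
              "Abar", "Lbar", "Bbar", "Ebar", "Dbar", "Obar"]    # the inverse relations

# A fragment of HS is any subset of the twelve modalities,
# so the number of fragments is the size of the powerset.
num_fragments = 2 ** len(modalities)
print(num_fragments)  # 4096
```

Of these 4096 syntactically distinct fragments, the paper's contribution is to determine how many are *expressively* distinct once inter-definabilities between modalities are taken into account.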
We introduce a path-based cyclic proof system for first-order μ-calculus, the extension of first-order logic by second-order quantifiers for least and greatest fixed points of definable monotone functions. We prove soundness of the system and demonstrate it to be as expressive as the known trace-based cyclic systems of Dam and Sprenger. Furthermore, we establish cut-free completeness of our system for the fragment corresponding to the modal μ-calculus.
We present sound and complete sequent calculi for the modal mu-calculus with converse modalities, also known as the two-way modal mu-calculus. Notably, we introduce a cyclic proof system wherein proofs can be represented as finite trees with back-edges, i.e., finite graphs. The sequent calculi incorporate ordinal annotations and structural rules for managing them. Soundness is proved with relative ease, as is the case for the modal mu-calculus with explicit ordinals. The main ingredients in the proof of completeness are isolating a class of non-wellfounded proofs with sequents of bounded size, called slim proofs, and a counter-model construction that shows slimness suffices to capture all validities. Slim proofs are further transformed into cyclic proofs by means of re-assigning ordinal annotations.
The thesis investigates the implications for moral philosophy of research in psychology. In addition to an introduction and concluding remarks, the thesis consists of four chapters, each exploring more specific challenges or inputs to moral philosophy from cognitive, social, personality, developmental, and evolutionary psychology. Chapter 1 explores and clarifies the issue of whether or not morality is innate. The chapter’s general conclusion is that evolution has equipped us with a basic suite of emotions that shape our moral judgments in important ways. Chapter 2 presents and investigates the challenge presented to deontological ethics by Joshua Greene’s so-called dual process theory. The chapter partly agrees with his conclusion that the dual process view neutralizes some common criticisms against utilitarianism founded on deontological intuitions, but also points to avenues left to explore for deontologists. Chapter 3 focuses on Katarzyna de Lazari-Radek and Peter Singer’s suggestion that utilitarianism is less vulnerable to so-called evolutionary debunking than other moral theories. The chapter is by and large critical of their attempt. In the final chapter, Chapter 4, attention is directed at the issue of whether or not social psychology has shown that people lack stable character traits, and hence that the virtue ethical view is premised on false or tenuous assumptions. Though this so-called situationist challenge at one time seemed like a serious threat to virtue ethics, the chapter argues for a moderate position, pointing to the fragility of much of the empirical research invoked to substantiate this challenge while also suggesting revisions to the virtue ethical view as such.
A set of moral problems known as the Trolley Dilemmas was presented to 3000 randomly selected inhabitants of the USA, Russia and China. It is shown that Chinese respondents are significantly less prone to support utility-maximizing alternatives than their US and Russian counterparts. A number of possible explanations, as well as methodological issues pertaining to the field of surveying moral judgment and moral disagreement, are discussed.
In this study we tested the fruitfulness of advanced bibliometric methods for mapping subdomains in philosophy. We studied the development over time of the number of publications on free will and sorites, the two subdomains treated in the study. We applied the cocitation approach to map the most cited publications, authors and journals, and we mapped frequently occurring terms using a term co-occurrence approach. Both subdomains show a strong increase of publications in Web of Science. When we decomposed the publications by faculty, we could see an increase of free will publications also in the social sciences, medicine and the natural sciences. The multidisciplinary character of free will research was reflected in the cocitation analysis and in the term co-occurrence analysis: we found clusters of cocited publications, authors and journals, and of co-occurring terms, representing philosophy as well as non-philosophical fields such as neuroscience and physics. The corresponding analyses of sorites publications displayed a structure consisting of research themes rather than fields. All in all, both philosophers involved in this study acknowledge the validity of the various networks presented. As this study shows, bibliometric mapping appears to provide an interesting tool for describing the cognitive orientation of a research field, not only in the natural and life sciences but also in philosophy.
In this article we distinguish two versions of the non-identity problem: one involving positive well-being and one involving negative well-being. Intuitively, there seems to be a difference between the two versions of the problem. In the negative case it is clear that one ought to cause the better-off person to exist. However, it has recently been suggested that this is not so in the positive case. We argue that such an asymmetrical treatment of the two versions should be rejected and that this is evidence against views according to which it is permissible to cause the less well-off person to exist in the positive non-identity case.
The abundance of perceived 'possibilities' for prevention contrasts sharply with the difficulties that face preventive programmes. We argue that this situation has emerged from an incomplete understanding of the process of prevention, involving a mixture of biological factors, human decision making and time perspectives. Based on examples, an analysis of the factors in the prevention process is presented.
The main purpose of this paper is to refute the ‘methodological continuity’ argument supporting epistemic realism in metaphysics. This argument aims to show that scientific realists have to accept that metaphysics is as rationally justified as science given that they both employ inference to the best explanation, i.e. that metaphysics and science are methodologically continuous. I argue that the reasons given by scientific realists as to why inference to the best explanation (IBE) is reliable in science do not constitute a reason to believe that it is reliable in metaphysics. The justification of IBE in science and the justification of IBE in metaphysics are two distinct issues with only superficial similarities, and one cannot rely on one for the other. This becomes especially clear when one analyses the debate about the legitimacy of IBE that has taken place between realists and empiricists. The metaphysician seeking to piggyback on the realist defense of IBE in science by the methodological continuity argument presupposes that the defense is straightforwardly applicable to metaphysics. I will argue that it is, in fact, not. The favored defenses of IBE by scientific realists make extensive use of empirical considerations, predictive power and inductive evidence, all of which are paradigmatically absent in the metaphysical context. Furthermore, even if the realist would concede the methodological continuity argument, I argue that the metaphysician fails to offer any agreed upon conclusions resulting from its application in metaphysics.
Astrophysics is a scientific field with a rich ontology of individual processes and general phenomena that occur in our universe. Despite its central role in our understanding of the physics of the universe, astrophysics has largely been ignored in the debate on scientific realism. As a notable exception, Hacking (Philos Sci 56(4):555–581, 1989) argues that the lack of experiments in astrophysics forces us to be anti-realist with respect to the entities which astrophysics claims inhabit the universe. In this paper, I investigate the viability of astrophysical realism about black holes, given other formulations of entity realism, specifically Cartwright’s (How the Laws of Physics Lie. Oxford University Press, 1983) and Chakravartty’s (A Metaphysics for Scientific Realism: Knowing the Unobservable. Cambridge University Press, 2007) versions of entity realism. I argue that on these accounts of entity realism, you cannot be a realist with respect to black holes, and likewise, if you want to be a realist about black holes, you cannot be an entity realist of these particular strands.
I explore the process of changes in the observability of entities and objects in science and how such changes impact two key issues in the scientific realism debate: the claim that predictively successful elements of past science are retained in current scientific theories, and the inductive defense of a specific version of inference to the best explanation with respect to unobservables. I provide a case-study of the discovery of radium by Marie Curie in order to show that the observability of some entities can change and that such changes are relevant for arguments seeking to establish the reliability of success-to-truth inferences with respect to unobservables.
One of the core charges against explanationist scientific realism is that it is too epistemically optimistic. Taking the charge seriously, some realists offer alternative forms of scientific realism – semi-realism and theoretical irrealism – designed to be more modest in their epistemic claims. In this paper, I consider two cases in cosmology and astrophysics that raise novel issues for both views: semi-realism is argued to end up making astrophysics metaphysically inflated when confronted with cases regarding the existence and evolution of galaxies and other astrophysical objects that cross the cosmic event horizon; theoretical irrealism is argued to be in serious tension with standard evidential reasoning in the context of the dark matter problem.
It is widely believed that science is in the business of finding out what the world is really like. The philosophical version of this belief is scientific realism -- a doctrine about science that tells us that we ought to believe that the best theories in science are true, and that the world is populated by the objects that those theories posit. If scientific realism were not correct, the argument goes, the incredible success of science would be a miracle. The best explanation for the success of science, however, is not that it is a miracle, but that scientific theories are true. This argument is an instance of inference to the best explanation, or IBE. Skeptics have questioned why scientific success must imply truth, given that the history of science contains many abandoned, false theories that were nevertheless successful. One of the controversies in the debate between scientific realists and anti-realists surrounds the legitimacy of reasoning in accordance with IBE. Realists need IBE to be a justified and reliable guide to truth. In this compilation thesis, I address various questions related to IBE and scientific realism. Paper 1 argues that scientific realism without IBE loses too much of its epistemic optimism, and that in some contexts it even becomes more pessimistic than the most prominent rival philosophical doctrine about science -- constructive empiricism. To avoid deflating realism, I argue, a defense of IBE is necessary. Paper 2 addresses whether methodological similarities between science and metaphysics force scientific realists to also be realists with respect to metaphysics. If IBE is legitimate, it should be valid not only in science but also in metaphysics, effectively inflating the ontology that scientific realists are rationally bound to accept. I argue against this conclusion. Paper 3 offers a proof of concept regarding a novel way to justify inferences to unobservable objects. Paper 4 establishes a novel critique of non-probabilistic versions of IBE in scientific realism.
Scientific realism driven by inference to the best explanation (IBE) takes empirically confirmed objects to exist, independent, pace empiricism, of whether those objects are observable or not. This kind of realism, it has been claimed, does not need probabilistic reasoning to justify the claim that these objects exist. But I show that there are scientific contexts in which a non-probabilistic IBE-driven realism leads to a puzzle. Since IBE can be applied in scientific contexts in which empirical confirmation has not yet been reached, realists will in these contexts be committed to the existence of empirically unconfirmed objects. As a consequence of such commitments, because they lack probabilistic features, the possible empirical confirmation of those objects is epistemically redundant with respect to realism.
In his natural philosophy, John Buridan reinterprets Aristotelian conceptions of necessity using a framework derived from his logical writings. After a discussion of Buridan’s account of varieties of necessity, in this paper I shall approach some interpretative uses of that account in which two natural philosophical concerns are involved. The first is connected with the relationship of modality and time in a question from the first book of his commentary on De Generatione et Corruptione, addressing a consequence from possibilities of alteration to possibilities of generation. The content of that question hinges on the metaphysical connection between alteration and substantial changes. In the third section, I shall explore a quasi-definition of causal necessity and contingency that Buridan discusses in the second book of his commentary on the Physics. Buridan’s discussion of alternative descriptions of causal necessity and contingency in that context reveals competing pictures of the role of essences in causal explanation, associated with Avicenna and Averroes respectively.
This dissertation is a study of John Buridan's (c.1300-c.1361) conception of modalities. Modal concepts - concepts of necessity, possibility, impossibility, and contingency - describe the ways in which things could and could not be otherwise. These concepts became notoriously central for philosophical discourse in the late Middle Ages. In recent years, Buridan's philosophy and modal theory have received sophisticated scholarly attention. The main contribution of the dissertation is to show new ways in which Buridan's modal theory is embedded in its contextual practical aims, as providing methods for argumentation schemes and analysis used in his natural philosophy and metaphysics.
The dissertation is divided into two parts. In Part I, I conduct a detailed analysis of Buridan's account of varieties of modality in logical contexts. In Chapter 2, I show that Buridan distinguishes between broad and restricted forms of necessity in his treatment of logical consequence. Moreover, I show how the distinction between these forms of necessity underpins his modal syllogistics. I argue in Chapter 3 that Buridan acknowledged a variety of modal concepts that are distinguished as a matter of degree. I identify the main modal concepts Buridan's theory reckons with, show how he motivates the distinctions among them, and clarify how they are logically related. Part II turns to applications of Buridan's modal analyses to natural philosophy. In Chapter 4, I address the relationship between necessity and sempiternal truth in Buridan's commentary on Aristotle's De Caelo and compare Buridan's treatment of a key passage in that commentary with the treatment by John of Jandun (c. 1285-1328), a near-contemporary master of arts at Paris. Chapter 5 focuses on Buridan's account of the relationship between power-based concepts of modality and his modal semantics. Chapter 6 describes Buridan's account of contingency in the Physics, and sets Buridan's account of the relationship between forms of contingency and chance against the background of a received debate between Avicenna's and Averroes' views on the subject. Finally, in Chapter 7, I analyse some important applications of Buridan's distinction between logical and metaphysical possibility in physical contexts. I conclude this part by showing how Buridan considered merely conceivable possibilities useful in natural philosophy, and draw further conclusions for investigating the connections between logic and natural philosophy in the later Middle Ages.
Naive speakers find some logical contradictions acceptable, specifically borderline contradictions involving vague predicates such as 'Joe is and isn't tall'. In a recent paper, Cobreros et al. (J Philos Logic, 2012) suggest a pragmatic account of the acceptability of borderline contradictions. We show, however, that the pragmatic account predicts the wrong truth conditions for some examples with disjunction. As a remedy, we propose a semantic analysis instead. The analysis is close to a variant of fuzzy logic, but conjunction and disjunction are interpreted as intensional operators.
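As a rough numerical illustration of why fuzzy-style semantics can license borderline contradictions (a sketch only: the analysis proposed in the paper interprets conjunction and disjunction intensionally, not truth-functionally as below), suppose a borderline predication such as 'Joe is tall' receives the intermediate value 0.5. The standard min/max connectives of fuzzy logic then assign the contradiction the value 0.5 rather than outright falsity:

```python
# Standard (Zadeh-style) truth-functional fuzzy connectives,
# shown only as a baseline the paper's intensional analysis departs from.
def f_not(v): return 1.0 - v
def f_and(v, w): return min(v, w)   # fuzzy conjunction
def f_or(v, w): return max(v, w)    # fuzzy disjunction

tall_joe = 0.5  # borderline case: 'Joe is tall' is half true
contradiction = f_and(tall_joe, f_not(tall_joe))  # 'Joe is and isn't tall'
print(contradiction)  # 0.5: not fully false, matching speakers' tolerance
```

On this baseline the contradiction is half true at the borderline, which mirrors the acceptability data; the paper's point is that a purely truth-functional treatment mishandles certain disjunctions, motivating the intensional operators instead.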
This essay advances a libertarian theory of moral rights, which responds effectively to some serious objections that have been raised against libertarianism. I show how libertarianism can explain children’s rights to certain physical integrity and aid. I defend strong moral rights of human pre-natal organisms, infants and children against all agents to certain non-interference with their physical integrity. I also argue that parents’ moral obligation to aid their offspring follows from a moral principle that prohibits agents from actively harming rights-bearers. Since this is the core principle of all versions of libertarianism, we gain simplicity and coherence. In chapter two, I explain my theory’s similarities and differences to a libertarian theory of moral rights advanced by Robert Nozick in his 1974 book Anarchy, State, and Utopia. I explain the structure and coherence of negative moral rights as advanced by Nozick. Then, I discuss what these negative rights are rights to, and the criteria for being a rights-bearer. In chapter three, I formulate a clear distinction between active and passive behaviour, and discuss the moral importance of foreseeing consequences of one’s active interventions. In chapter four, I claim that some pre-natal human organisms, human infants, and children are rights-bearers. I formulate a morally relevant characterization of potentiality, and argue that possession of such potentiality is sufficient to have negative rights against all agents. In chapter five, I discuss whether potential moral subjects, in addition, have positive moral rights against all agents to means sufficient to develop into actual moral subjects. I argue that this suggestion brings some difficulties when applied to rights-conflicts. In chapter six, I argue that potential moral subjects’ rights to means necessary to develop into actual moral subjects can be defended in terms of merely negative rights. By adopting the view advanced in this chapter, we get a simple, coherent theory. It avoids the difficulties in the view advanced in chapter five, while keeping its intuitively plausible features. In chapter seven, I discuss whether the entitlement theory is contradictory and morally repugnant. I argue that my version of the entitlement theory is not.
This paper shows that versions of prioritarianism that focus at least partially on well-being levels at certain times conflict with conventional views of prudential value and prudential rationality. So-called timeslice prioritarianism, and pluralist views that ascribe importance to timeslices, hold that a benefit matters more, the worse off the beneficiary is at the time of receiving it. We show that views that evaluate outcomes in accordance with this idea entail that an agent who delays gratification makes an outcome worse, even if it is better for the agent and worse for no one else. We take this to show that timeslice prioritarianism and some pluralist views violate Weak Pareto, and we argue that these versions of prioritarianism are implausible.
Computational linguistics studies natural language in its various manifestations from a computational point of view, both on the theoretical level (modeling grammar modules dealing with natural language form and meaning, and the relation between the two) and on the practical level (developing applications for language and speech technology). Right from the start in the 1950s, there have been strong links with computer science, logic, and many areas of mathematics - one can think of Chomsky's contributions to the theory of formal languages and automata, or Lambek's logical modeling of natural language syntax. The symposium on Logic and Algorithms in Computational Linguistics 2018 (LACompLing2018) assesses the place of logic, mathematics, and computer science in present-day computational linguistics. It intends to be a forum for presenting new results as well as work in progress.
A Course in Behavioral Economics 2e is an accessible and self-contained introduction to the field of behavioral economics. The author introduces students to the subject by comparing and contrasting its theories and models with those of mainstream economics. Full of examples, exercises and problems, this book emphasises the intuition behind the concepts and is suitable for students from a wide range of disciplines.
In an essay rich in humor and engaging stories, at times running counter to certain pessimistic scenarios, Angner shows the salutary power of economics to make the world a place better suited to well-being. All the challenges and crises humanity faces are in some way caused by human actions, individual and collective, and every solution involves finding ways to get people to behave differently than usual. Economics has some of the most powerful tools available for intervening in this respect: it is not only about strictly monetary or material matters, such as making predictions about the stock market or promoting the interests of those in power, but can solve a great variety of problems precisely because it helps change individual behavior and its social consequences.
Decision making has become central to managing complexity in today's world: much of human activity (finance, science, medicine, art, and life in general) can be interpreted as a matter of people making certain kinds of choices. This volume is a rigorous yet engaging introduction to one of the most recent developments in the social sciences, drawing also on the findings of cognitive psychology. Behavioral economics starts from the premise that many decisions are not made on the basis of logical and rational criteria; rather, individuals' behavior is often driven by other factors. On this assumption, 'deviations from perfect rationality' are not negligible (as neoclassical economics held) but systematic and thus fairly predictable, enough to support the development of new descriptive theories of decision making. Starting from the foundations of the neoclassical school of economics, the volume clearly explains the fundamental concepts of behavioral economics and illustrates the intuitions behind them. A rich selection of applications from economics, management, marketing, political science, and public policy accompanies the discussion, showing how behavioral economics can be a fundamental tool both for individuals and for public decision makers. No advanced knowledge of mathematics is required.
Economics makes itself felt in every part of everyday life, while also steering the great movements of the world. But are our worlds driven by wise economic decisions? The economist and philosopher Erik Angner shows in his widely noted book that there is great room for improvement. Through a sustainable economic approach, we can come to grips with the problems ahead of us, whether they concern pandemics, famine, climate disasters or poverty.
In En bättre värld är möjlig, the author offers both concrete and visionary proposals for how we can, at the micro and macro levels, create a better world for all the world's citizens.
Economics has the power to make the world a better, happier and safer place: this book shows you how. Our world is in a mess. The challenges of climate change, inequality, hunger and a global pandemic mean our way of life seems more imperilled and society more divided than ever; but economics can help!
From parenting to organ donation, housing to anti-social behaviour, economics provides the tools we need to fix the biggest issues of today. Far from being a means to predict the stock market or enrich the elite, economics provides a lens through which we can better understand how things work, design clever solutions and create the conditions in which we can all flourish.
With a healthy dose of optimism, and packed with stories of economics in everyday situations, Erik Angner demonstrates the methods he and his fellow economists use to help improve our lives and the society in which we live. He shows us that economics can be a powerful force for good, awakening the possibility of a happier, more just and more sustainable world.
Economics is permeated with value judgements, and removing them would be neither possible nor desirable. They are consequential, in the sense that they have a sizeable impact on economists’ output. Yet many economists may not even realise they are there. This paper surveys ways in which values influence economic theory and practice and explores some implications for the manner in which economics – especially welfare economics – is taught, practised and communicated. Explicit attention to values needs to be embedded in the teaching of economics at all levels.
Behavioral economics has long defined itself in opposition to neoclassical economics, but recent developments suggest a synthesis may be on the horizon. In particular, several economists have argued that behavioral factors can be incorporated into standard theory, and that the days of behavioral economics are therefore numbered. This paper explores the proposed synthesis and argues that it is distinctly behavioral in nature – not neoclassical. Far from indicating that behavioral economics as a stand-alone research program is over, the proposed synthesis represents the consummate conversion of neoclassical economists into behavioral ones.
Daniel M. Hausman holds that preferences in economics are total subjective comparative evaluations—subjective judgments to the effect that something is better than something else all things told—and that economists are right to employ this conception of preference. Here, I argue against both parts of Hausman’s thesis. The failure of Hausman’s account, I continue, reflects a deeper problem, that is, that preferences in economics do not need an explicit definition of the kind that he seeks. Nonetheless, Hausman’s labors were not in vain: his accomplishment is that he has articulated a useful model of the theory.
What, if anything, is problematic about the involvement of celebrities in democratic politics? While a number of theorists have criticized celebrity involvement in politics, none so far have examined this issue using the tools of social epistemology, the study of the effects of social interactions, practices, and institutions on knowledge and belief acquisition. We will draw on these resources to investigate the issue of celebrity involvement in politics, specifically as this involvement relates to democratic theory and its implications for democratic practice. We will argue that an important and underexplored form of power, which we will call epistemic power, can explain one important way in which celebrity involvement in politics is problematic. This is because unchecked uses and unwarranted allocations of epistemic power, which celebrities tend to enjoy, threaten the legitimacy of existing democracies and raise important questions regarding core commitments of deliberative, epistemic, and plebiscitary models of democratic theory. We will finish by suggesting directions that democratic theorists could pursue when attempting to address some of these problems.
How should academics respond to the work of immoral intellectuals? This question is of increasing concern in academic circles, but has received little attention in the academic literature. In this article, we will investigate what our response to immoral intellectuals should be. We begin by outlining the cases of three intellectuals who have behaved immorally or at least have been accused of doing so. We then investigate whether it is appropriate to admire an immoral person for their intellectual contributions. We will argue that such admiration can be a fitting response to the intellectual achievements of an immoral person, but only if the person has indeed done something important. However, we then identify two moral reasons against openly admiring immoral intellectuals. First, that such admiration may give the appearance of condoning the immoral acts of the intellectual. Second, that such admiration may lead to emulation of the intellectual's problematic ideals. This may be enough to persuade us of the moral reasons to avoid engaging with the work of unimportant and easily replaceable intellectuals in our research and our teaching. However, for more important intellectual figures we have weighty educational reasons to cite them and include them in our courses. This leads to a tension, which we attempt to resolve by proposing ways to accommodate the moral reasons against admiring immoral intellectuals and the intellectual reasons to include them in our courses, though we conclude on the pessimistic note that this tension may not be entirely resolvable.
Sports fans sometimes feel shame for their team's moral transgressions. In this paper, we investigate this phenomenon. We offer an account of sports fan shame in terms of collective shame. We argue that this account is superior to accounts of sports fan shame in terms of shame for others and shame for oneself. We then argue that accepting the role that sports stars play in bringing about collective shame amongst their fans provides a new way of justifying the claim that sports stars are subject to special moral obligations.
Is it appropriate to honor artists who have created great works but who have also acted immorally? In this article, after arguing that honoring involves identifying a person as someone we ought to admire, we present three moral reasons against honoring immoral artists. First, we argue that honoring can serve to condone their behavior, through the mediums of emotional prioritization and exemplar identification. Second, we argue that honoring immoral artists can generate undue epistemic credibility for the artists, which can lead to an indirect form of testimonial injustice for the artists' victims. Third, we argue, building on the first two reasons, that honoring immoral artists can also serve to silence their victims. We end by considering how we might respond to these reasons.
In democracies around the world, political forces calling for a rollback of globalization are in the ascendancy. Longstanding consensus about the benefits of free trade and human rights, and about the legitimacy of the international institutions enabling these goods, has been questioned by successful populist politicians on both sides of the ideological spectrum. Some even claim that the entire liberal international order has become contested, perhaps as never before (Lake et al., 2021). An emerging critique of multilateralism argues that states and peoples should not be shackled by international law and international legal arrangements, but rather that states should "go it alone." The picture painted is one where state sovereignty is constrained and undermined by international institutions. This view implies that there is necessarily a tradeoff between multilateralism and state autonomy.
We usually examine our considered intuitions regarding inequality, including health inequality, by comparing populations of the same size. Likewise, the standard measures of inequality and its badness have been developed on the basis of only such comparisons. Real-world policies to mitigate inequalities, however, will most often also affect the size of a population. For example, many health policies are very likely to prevent deaths and affect procreation decisions. Population control policies, such as China's one-child policy, trivially affect population size. In addition, if we are interested in measuring the development of global inequality during the last thirty years or so, we have to take into account the great population expansion in countries such as India and China. Hence, we need to consider how to extend measures of inequality to different number cases, that is, how to take into account the complication that population numbers are often not equal between the compared alternatives. Moreover, examining different number cases is a fruitful way of probing our ideas about egalitarian concerns and will reveal as yet unnoticed complexities and problems in our current conceptualization of the value of equality, or so I'll argue.