The standard relation of logical consequence allows for non-standard interpretations of logical constants, as was shown early on by Carnap. But then how can we learn the interpretations of logical constants, if not from the rules which govern their use? Answers in the literature have mostly consisted in devising clever rule formats going beyond the familiar what follows from what. A more conservative answer is possible. We may be able to learn the correct interpretations from the standard rules, because the space of possible interpretations is a priori restricted by universal semantic principles. We show that this is indeed the case. The principles are familiar from modern formal semantics: compositionality, supplemented, for quantifiers, with topic-neutrality.
The standard semantic definition of consequence with respect to a selected set X of symbols, in terms of truth preservation under replacement (Bolzano) or reinterpretation (Tarski) of symbols outside X, yields a function mapping X to a consequence relation =>_X. We investigate a function going in the other direction, thus extracting the constants of a given consequence relation, and we show that this function (a) retrieves the usual logical constants from the usual logical consequence relations, and (b) is an inverse to (more precisely, forms a Galois connection with) the Bolzano-Tarski function.
The shift of interest in logic from just reasoning to all forms of information flow has considerably widened the scope of the discipline, as amply illustrated in Johan van Benthem's recent book Logical Dynamics of Information and Interaction. But how much does this change when it comes to the study of traditional logical notions such as logical consequence? We propose a systematic comparison between classical consequence, explicated in terms of truth preservation, and a dynamic notion of consequence, explicated in terms of information flow. After a brief overview of logical consequence relations and the distinctive features of classical consequence, we define classical and dynamic consequence over abstract information frames. We study the properties of information under which the two notions prove to be equivalent, both in the abstract setting of information frames and in the concrete setting of Public Announcement Logic. The main lesson is that dynamic consequence diverges from classical consequence when information is not persistent, which is in particular the case of epistemic information about what we do not yet know. We end by comparing our results with recent work by Rothschild and Yalcin on the conditions under which the dynamics of information updates can be classically represented. We show that classicality for consequence is strictly less demanding than classicality for updates. Johan van Benthem's recent book Logical Dynamics of Information and Interaction [8] can be seen as a passionate plea for a radically new view of logic. To be sure, the book is not a philosophical discussion of what logic is but rather an impressive series of illustrations of what logic can be, with presentations of numerous logical languages and a wealth of meta-logical results about them. The view is called simply Logical Dynamics, and contrasted with more traditional views of logic, and also with the earlier view from e.g. 
[5], now called Pluralism, in which logic was seen as the study of consequence relations. According to Logical Dynamics, logic is not only about reasoning, about what follows from what, but about all aspects of information flow among rational agents. Not just proof and inference, but observations, questions, announcements, communication, plans, strategies, etc. are first-class citizens in the land of Logic. And not only the output of these activities belongs to logic, but also the processes leading up to it. This is a fascinating and inspiring view of logic. But how different is it from a more standard view? In particular, what does it change for the analysis of logical consequence, which had been the focus of traditional logical enquiry? This paper attempts some answers to the latter question, with a view to getting clearer about the former.
The goal of the study of dependence and independence in logic is to establish a basic theory of dependence and independence phenomena underlying seemingly unrelated subjects such as game theory, random variables, database theory, scientific experiments, and probably many others. The monograph Dependence Logic (J. Väänänen, Cambridge UP, 2007) stimulated an avalanche of new results which have demonstrated remarkable convergence in this area. The concepts of (in)dependence in the different fields of science have surprising similarity and a common logic is starting to emerge. This special issue will give an overview of the state of the art of this new field.
This article is concerned with the principle of compositionality, i.e. the principle that the meaning of a complex expression is a function of the meanings of its parts and its mode of composition. After a brief historical background, a formal algebraic framework for syntax and semantics is presented. In this framework, both syntactic operations and semantic functions are (normally) partial. Using the framework, the basic idea of compositionality is given a precise statement, and several variants, both weaker and stronger, as well as related properties, are distinguished. Several arguments for compositionality are discussed, and the standard arguments are found inconclusive. Also, several arguments against compositionality, and for the claim that it is a trivial property, are discussed, and are found to be flawed. Finally, a number of real or apparent problems for compositionality are considered, and some solutions are proposed.
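The core principle discussed in this abstract is standardly rendered as a homomorphism condition; the following is an illustrative sketch in conventional notation (the symbols µ, α, r_α are chosen here for exposition and need not match the paper's own):

```latex
% Compositionality as a homomorphism condition: for every syntactic
% operation \alpha there is a semantic operation r_\alpha such that,
% whenever \alpha(e_1,\dots,e_n) is defined,
\[
  \mu\bigl(\alpha(e_1,\dots,e_n)\bigr)
  \;=\;
  r_\alpha\bigl(\mu(e_1),\dots,\mu(e_n)\bigr),
\]
% where \mu is the meaning assignment. In the algebraic framework of
% the paper, both \alpha and r_\alpha may be partial operations.
```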
Starting from the familiar observation that no straightforward treatment of pure quotation can be compositional in the standard (homomorphism) sense, we introduce general compositionality, which can be described as compositionality that takes linguistic context into account. A formal notion of linguistic context type is developed, allowing the context type of a complex expression to be distinct from those of its constituents. We formulate natural conditions under which an ordinary meaning assignment can be non-trivially extended to one that is sensitive to context types and satisfies general compositionality. As our main example we work out a Fregean treatment of pure quotation, but we also indicate that the method applies to other kinds of context, e.g. intensional contexts.
This paper gives a uniform account of the meaning of generalizations with explicit exceptions that employ the prepositions “but”, “except”, and “except for”. Our theory is that exceptives depend on generalizations, which can but need not be universal, whose generality they limit, and some of whose exceptions they comment on. Every generalization intrinsically partitions its domain of applicability into regular cases, which are as it says to expect, and exceptions, which are not. A generalization’s exceptions are instances that falsify it if sufficiently prevalent. These two facts underpin the meaning of exceptives as combining a generality claim with an exception claim, giving correct truth conditions for the several ways the three exceptive prepositions are used, and significantly improving on existing semantic accounts in the literature. We support this by analyzing a wide range of examples. The analysis applies whether or not the phrase following the exceptive preposition is a DP, and whether or not the generalization is expressed with a quantifier. We further argue that these exceptive prepositions are synonymous, contrary to the widely held view that a difference in meaning explains their different syntactic distributions.
In a mathematical perspective, neighborhood models for modal logic are generalized quantifiers, parametrized to points in the domain of objects/worlds. We explore this analogy further, connecting generalized quantifier theory and modal neighborhood logic. In particular, we find interesting analogies between conservativity for linguistic quantifiers and the locality of modal logic, and between the role of invariances in both fields. Moreover, we present some new completeness results for modal neighborhood logics of linguistically motivated classes of generalized quantifiers, and raise new types of open problems. With the bridges established here, many further analogies might be explored between the two fields to mutual benefit.
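The parametrized-quantifier reading of neighborhood models mentioned here can be sketched as follows (a standard rendering of the neighborhood truth clause, not the paper's own formulation):

```latex
% A neighborhood model assigns each world w a family
% N(w) \subseteq \mathcal{P}(W). The modal clause
\[
  M, w \models \Box\varphi
  \quad\Longleftrightarrow\quad
  \llbracket \varphi \rrbracket^{M} \in N(w)
\]
% reads N(w) as a type <1> generalized quantifier over the set W of
% worlds, parametrized by the evaluation point w.
```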
The main difference between the classical Aristotelian square of opposition and the modern one is not, as many seem to think, that the classical square has or presupposes existential import. The difference lies in the relations holding along the sides of the square: (sub)contrariety and subalternation in the classical case, inner negation and dual in the modern case. This is why the modern square, but not the classical one, applies to any (generalized) quantifier of the right type: all, no, more than three, all but five, most, at least two-thirds of the,... After stating these and other logical facts about quantified squares of opposition, we present a number of examples of such squares spanned by familiar quantifiers. Special attention is paid to possessive quantifiers, such as Mary’s, at least two students’, etc., whose behavior under negation is more complex and in fact can be captured in a cube of opposition.
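The three negations spanning the modern square can be sketched in standard generalized-quantifier notation (an illustrative rendering; M denotes the universe):

```latex
% For a type <1,1> quantifier Q, the modern square is spanned by outer
% negation, inner negation (postcomplement), and dual:
\[
  (\neg Q)(A,B) \;\Leftrightarrow\; \text{not } Q(A,B), \qquad
  (Q\neg)(A,B) \;\Leftrightarrow\; Q(A,\, M \setminus B), \qquad
  Q^{d} \;=\; \neg(Q\neg) \;=\; (\neg Q)\neg .
\]
% E.g. with Q = all:  \neg Q = not all,  Q\neg = no,  Q^{d} = some.
```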
I investigate how the notion of compositionality can be adapted to various kinds of semantics that take context dependence seriously.
This paper begins to explore what it means for an operator to be *constant*, roughly in the sense of meaning the same on every universe. We consider total as well as partial operators of various types, with special focus on generalized quantifiers.
This is a reply to H. Ben-Yami, 'Generalized quantifiers, and beyond' (this journal, 2009), where he argues that standard GQ theory does not explain why natural language quantifiers have a restricted domain of quantification. I argue, on the other hand, that although GQ theory gives no deep explanation of this fact, it does give a sort of explanation, whereas Ben-Yami's suggested alternative is no improvement.
Bolzano’s definition of consequence in effect associates with each set X of symbols (in a given interpreted language) a consequence relation =>_X. We present this in a precise and abstract form, in particular studying minimal sets of symbols generating =>_X. Then we present a method for going in the other direction: extracting from an arbitrary consequence relation => its associated set C_=> of constants. We show that this returns the expected logical constants from familiar consequence relations, and that, restricting attention to sets of symbols satisfying a strong minimality condition, there is an isomorphism between the set of strongly minimal sets of symbols and the set of corresponding consequence relations (both ordered under inclusion).
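As an illustrative sketch using the abstract's own notation (details hedged, since the precise definitions are given in the paper itself), the two maps run in opposite directions:

```latex
% Bolzano's map: fix a set X of symbols; then \Gamma \Rightarrow_X \varphi
% iff truth is preserved from \Gamma to \varphi under every admissible
% replacement of the symbols outside X.
% The extraction map goes the other way: given a consequence relation
% \Rightarrow, collect the symbols that cannot be varied without
% disturbing it.
\[
  X \;\longmapsto\; \Rightarrow_X
  \qquad\qquad
  \Rightarrow \;\longmapsto\; C_{\Rightarrow}
\]
% Per the abstract, restricted to strongly minimal sets of symbols,
% these maps yield an isomorphism of the two inclusion orders.
```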
I generalize Keenan's study of midpoints, generalized quantifiers equivalent to their own postcomplements (inner negations), focusing on the difference between a global and a local perspective of quantifiers.
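The midpoint property mentioned here can be stated in a short formula (a standard rendering, with M the universe; the half-example is Keenan's classic illustration):

```latex
% A quantifier Q is a midpoint when it equals its own postcomplement
% (inner negation):
\[
  Q = Q\neg, \quad\text{i.e.}\quad
  Q(A,B) \;\Leftrightarrow\; Q(A,\, M \setminus B)
  \quad\text{for all } A, B \subseteq M .
\]
% Example: exactly half of the A's are B iff exactly half of the A's
% are non-B (for finite A).
```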
We study the Aristotelian square of opposition from the modern perspective of generalized quantifiers. With a subtle but important change in the relations holding along the sides of the square, we show that it applies to all kinds of quantifiers, not just the four Aristotelian ones. We establish some of its logical properties, and give numerous examples of squares spanned by various quantifiers, in particular those expressed by possessive constructions.
Compositionality is currently discussed mainly in computer science, linguistics, and the philosophy of language. In computer science, it is seen as a desirable design principle. But in linguistics and especially in philosophy it is an "issue". Most theorists have strong opinions about it. Opinions, however, vary drastically: from the view that compositionality is trivial or empty, or that it is simply false for natural languages, to the idea that it plays an important role in explaining human linguistic competence. This situation is unsatisfactory, and may lead an outside observer to conclude that the debate is hopelessly confused.
I believe there is something in the charge of confusion, but that compositionality is nevertheless an idea that deserves serious consideration, for logical as well as natural languages. In this paper I try to illustrate why, without presupposing extensive background knowledge about the issue.
I attempt an explication of what it means for an operation across domains to be the same on all domains, an issue that Feferman (Logic, logics and logicism, Notre Dame J. Form. Log. 40, 31–54 (1999)) took to be central for a successful delimitation of the logical operations. Some properties that seem strongly related to sameness are examined, notably isomorphism invariance, and sameness under extensions of the domain. The conclusion is that although no precise criterion can satisfy all intuitions about sameness, combining the two properties just mentioned yields a reasonably robust and useful explication of sameness across domains.
Compositionality is a principle used in logic, philosophy, mathematics, linguistics, and computer science for assigning meanings to language expressions in a systematic manner following syntactic construction, thereby allowing for a perspicuous algebraic view of the syntax-semantics interface. Yet the status of the principle remains under debate, with positions ranging from compositionality always being achievable to its having genuine empirical content. This paper attempts to sort out some major issues in all this from a logical perspective. First, we stress the fundamental harmony between Compositionality and its apparent antipode of Contextuality that locates meaning in interaction with other linguistic expressions and in other settings than the actual one. Next, we discuss basic further desiderata in designing and adjudicating a compositional semantics for a given language in harmony with relevant contextual syntactic and semantic cues. In particular, in a series of concrete examples in the realm of logic, we point out the dangers of over-interpreting compositional solutions, the ubiquitous entanglement of assigning meanings and the key task of explaining given target inferences, and the dynamics of new language design, illustrating how even established compositional semantics can be rethought in a fruitful manner. Finally, we discuss some fresh perspectives from the realm of game semantics for natural and formal languages, the general setting for Samson Abramsky’s influential work on programming languages and process logics. We highlight outside-in coalgebraic perspectives on meanings as finite or infinitely unfolding behavior that might challenge and enrich current discussions of compositionality.
We investigate what possessives mean by examining a wide range of English examples, pre- and postnominal, quantified and non-quantified, to arrive at general, systematic truth conditions for them. In the process, we delineate a rich class of paradigmatic possessives having cross-linguistic interest, exploiting characteristic semantic properties. One is that all involve (implicit or explicit) quantification over possessed entities. Another is that this quantification always carries existential import, even when the quantifier over possessed entities itself doesn't. We show that this property, termed possessive existential import, is intimately related to the notion of narrowing (Barker 1995). Narrowing has implications for compositionally analyzing possessives' meaning. We apply the proposed semantics to the issue of definiteness of possessives, negation of possessives, partitives and prenominal possessives, postnominal possessives and complements of relational nouns, freedom of the possessive relation, and the semantic relationship between pre- and postnominal possessives.