Some idle thoughts on the arguments for semantic externalism/internalism

This semester I’m teaching an intro semantics course for the first time and I decided to use Saeed’s Semantics as a textbook. It seems like a good textbook; it gives a good survey of all the modern approaches to semantics—internalist, externalist, even so-called cognitive semantics—though the externalist bias is clear if you know what to look for. For instance, the text is quick to bring up the famous externalist thought experiments—Putnam’s robotic cats, Quine’s gavagai, etc.—to undercut the internalist approaches, but doesn’t really present the internalist critiques and counterarguments. So, I’ve been striving to correct that in my lectures.

While I was preparing my most recent lecture, something struck me. More precisely, I was suddenly able to put words to something that’s bothered me for a while about the whole debate: the externalist case is strongest for natural kinds, but the internalist case is strongest for human concepts. Putnam talks about cats and water, Kripke talks about tigers and gold, while Katz talks about bachelors and sometimes artifacts. This is not to say that the arguments on either side are unanswerable—Chomsky, I think, has provided pretty good arguments that, even for natural kinds, our internal concepts are quite complicated, and there are many thorny issues for internalist approaches too—but they do have slightly different empirical bases, which no doubt inform their approaches—if your theory can handle artifact concepts really well, you might be tempted to treat everything that way.

I don’t quite know what to make of this observation yet, but I wanted to write it down before I forgot about it.


There’s also a potential, but maybe half-baked, political implication to this observation. Natural kinds are more or less constant in that, while they can be tamed and used by humans, we can’t really change them that much, and thinking that you can, say, turn lead into gold would mark you as a bit of a crackpot. Artifacts and social relations, on the other hand, are literally created by free human action. If you view the world with natural kinds at the center, you may be led to the view that the world has its own immutable laws that we can maybe harness, maybe adapt to, but never change.

If, on the other hand, your theory centers artifacts and social relations, then you might be led to the conclusion, as expressed by the late David Graeber, that “the ultimate hidden truth of the world is that it is something we make and could just as easily make differently.”

But, of course, I’m just speculating here.

On the general character of semantic theory (Part b)

(AKA Katz’s Semantic Theory (Part IIIb). This post discusses the second half of chapter 2 of Jerrold Katz’s 1972 opus. For my discussion of the first half of the chapter, go here.)

(Note: This post was written in fits and starts, which is likely reflected in its style (or lack thereof). My apologies in advance.)

The first half of chapter 2 was concerned with the broader theory of language, rather than a semantic theory. In the second half of the chapter, Katz begins his sketch of the theory of semantics. It’s at this point that I pick up my review.

4. The structure of the theory of language

In this section, Katz discusses universals, which he frames, following Chomsky, as constraints on grammars. Katz differs from Chomsky, though, in how he divvies up the universals—whereas Chomsky, in Aspects, distinguishes between formal and substantive universals, Katz adds a third type: organizational universals. These classifications are defined as follows:

Formal universals constrain the form of the rules in a grammar; substantive universals provide a theoretical vocabulary from which the constructs used to formulate the rules of particular grammars are drawn; organizational universals, of which there are two subtypes, componential organizational universals and systematic organizational universals, specify the interrelations among the rules and among the systems of rules within a grammar.

p30-31

Furthermore, formal, substantive, and componential universals cross-classify with phonological, syntactic, and semantic universals. This means that we can talk about substantive phonological universals, or componential semantic universals, and so on. So, for example, a phonological theory consists in a specification of the formal, substantive, and componential universals at the phonological level, and such a specification amounts to a definition of the phonological component of the language faculty. Systematic universals, then, specify how the components of the grammar are related to each other. With this discussion, Katz sets up his goals: to specify the formal, substantive, and componential universals at the semantic level. More precisely, he aims to develop the following:

(2.7) A scheme for semantic representation consisting of a theoretical vocabulary from which semantic constructs required in the formulation of particular semantic interpretations can be drawn

p33

(2.8) A specification for the form of the dictionary and a specification of the form of the rules that project semantic representations for complex syntactic constituents from the dictionary’s representations of the senses of their minimal syntactic parts.

p33

(2.9) A specification of the form of the semantic component, of the relation between the dictionary and the projection rules, and of the manner in which these rules apply in assigning semantic representations

p3

These three aspects of semantic theory, according to Katz, represent the substantive, formal, and componential universals, respectively. A theory that contains (2.7)–(2.9) and answers questions 1–15 (as listed here) would count as an adequate semantic theory.

5. Semantic theory’s model of a semantic component

So, Katz asks rhetorically, how can semantic relations, such as analyticity, synonymy, or semantic similarity, be captured in the purely formal terms required by (2.7)–(2.9)? The answer is simple: semantic relations and properties are merely formal aspects of the compositional meanings of expressions. This is a bold and controversial claim: semantic properties/relations are formal properties/relations or, to put it more strongly, semantic properties/relations are, in fact, syntactic properties/relations (where “syntactic” is used in a very broad sense). Of course, this claim is theoretical and rather coarse. Katz aims to make it empirical and fine.

So, what does Katz’s semantic theory consist of? At the broadest level, it consists of a dictionary and a set of projection rules. No surprise yet; it’s a computational theory, and any computational system consists of symbols and rules. The dictionary contains entries for every morpheme in a given language, where each entry is a collection of the senses of that morpheme. He then defines two “technical terms.” The first is reading, which refers to “a semantic representation of a sense of a morpheme, word, phrase, clause, or sentence” and which is further divided into lexical readings and derived readings. The second is semantic marker, which refers to “the semantic representation of one or another of the concepts that appear as parts of senses.” Katz continues by identifying the limiting case of semantic markers: primitive semantic markers.

Here it’s worth making a careful analogy to syntactic theory. Semantic markers, as their name suggests, are analogous to phrase markers. Each is a representation of constituency: a phrase marker represents the syntactic constituents of an expression while a semantic marker represents the conceptual constituents of a concept. In each theory there are base cases of the markers: morphemes in syntactic theory and the aptly named primitive semantic markers in semantic theory. I must stress, of course, that this is only an analogy, not an isomorphism. Morphemes are not mapped one-to-one to primitive semantic markers, or vice versa. Just as a simple morpheme can be phonologically complex, it can also be semantically complex. Furthermore, as we’ll see shortly, while complex semantic markers are structured, there is no reason to expect them to be structured according to the principles of syntactic theory.

Before Katz gets to the actual nitty-gritty of formalizing these notions, he pauses to discuss ontology. He’s a philosopher, after all. Semantic markers are representations of concepts and propositions, but what are concepts and propositions? Well, we can be sure of some things that they are not: images, mental ideas, and particular thoughts, which Katz groups together as what he calls cognitions. Cognitions, for Katz, are concrete, meaning they can be individuated by who has them, when and where they occur, and so on. If you and I have the same thought (e.g., “Toronto is the capital of Ontario”), then we have had different cognitions. Concepts and propositions, for Katz, are abstract objects and, therefore, independent of space and time, meaning they can’t be individuated by their nonexistent spatiotemporal properties. They can, however, be individuated by natural languages, which Katz also takes to be abstract objects, and, in fact, they are individuated easily by speakers of natural languages. Since, in a formulation echoed recently by Paul Pietroski (at around 5:45), “senses are concepts and propositions connected with phonetic (or orthographic) objects in natural languages” and the goal of linguistic theory is to construct grammars that model that connection, the question of concept- and proposition-individuation is best answered by linguistic theory.1

But, Katz’s critics might argue, individuation of concepts and propositions is not a definition of “concept” or “proposition”. True, Katz responds, but so what? If we needed to explicitly define the object of our study before we started studying it, we wouldn’t have any science. He uses the example of Maxwell’s theory of electromagnetism, which accurately models the behaviour and structural properties of electromagnetic waves but does not furnish any definition of electromagnetism. So if we can come up with a theory that accurately models the behaviour and structural properties of concepts and propositions, why should we demand a definition?

We also can’t expect a definition of “semantic marker” or “reading” right out of the gate. In fact, Katz argues, one of the goals of semantic theory (2.7) is to come up with those definitions, and we can’t demand a complete theory before we start developing that theory. Nevertheless, we can use some basic intuitions to come up with a preliminary sketch of what a reading and a semantic marker might look like. For instance, the everyday word/concept “chair” has a common sense, which is composed of subconcepts and can be represented as the set of semantic markers in (2.15).

(2.15) (Object), (Physical), (Non-living), (Artifact),
       (Furniture), (Portable), (Something with legs),
       (Something with a back), (Something with a seat),
       (Seat for one)

Of course, this is just preliminary. Katz identifies a number of places for improvement. Each of the semantic markers is likely decomposable into simpler markers. Even the concept represented by “(Object)” is likely decomposable.

Or, Katz continues, we can think of semantic markers as ways of making semantic generalizations. Consider how “chair” relates to words such as “hat,” “planet,” “car,” and “molecule” as compared to words such as “truth,” “thought,” “togetherness,” and “feeling.” Obviously, these words all denote distinct concepts, but just as obviously, the two groupings contrast with each other. We can think of the semantic marker “(Object)” as the distinguishing factor in these groupings: the former is a group of objects, the latter a group of non-objects. So, semantic markers, like phonological features and grammatical categories, are expressions of natural classes.

Finally, Katz proposes a third way of thinking of semantic markers: “as symbols that mark the components of senses of expressions on which inferences from sentences containing the expressions depend.” (p41) For instance, we can infer (2.19) from (2.18), but we can’t infer (2.27).

(2.18) There is a chair in the room.

(2.19) There is a physical object in the room.

(2.27) There is a woman in the room.

We can express this inference pattern by saying that every semantic marker that comprises the sense of “physical object” in (2.19) is contained in the sense of “chair” in (2.18), but that this is not the case for “woman” in (2.27). The sense of “woman” in (2.27) contains semantic markers like “(Female)” which are not contained in the sense of “chair” in (2.18). Here Katz notes that his proposal that concepts like “chair” consist of markers is merely an extension of an observation by Frege that (2.28a,b,c) are together equivalent to (2.29).

(2.28)
(a) 2 is a positive number
(b) 2 is a whole number
(c) 2 is less than 10

(2.29) 2 is a positive whole number less than 10

For Frege, “positive number”, “whole number”, and “less than 10” are all properties of “2” and marks of “positive whole number less than 10”. Katz’s extension is to say that the concepts associated with simple expressions can have their own marks.
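
To make the containment idea behind (2.18)–(2.27) concrete, here is a minimal Python sketch. The marker sets are illustrative toys of my own, not Katz’s actual decompositions, and real readings would be structured objects rather than flat sets.

```python
# Toy lexical readings: each sense modeled as a flat set of semantic markers.
# These decompositions are my own illustrations, not Katz's actual entries.
SENSES = {
    "chair": {"Object", "Physical", "Non-living", "Artifact", "Furniture",
              "Portable", "Something with legs", "Something with a back",
              "Something with a seat", "Seat for one"},
    "physical object": {"Object", "Physical"},
    "woman": {"Object", "Physical", "Living", "Human", "Adult", "Female"},
}

def licenses_inference(premise: str, conclusion: str) -> bool:
    """The inference goes through just in case every marker in the
    conclusion's sense is contained in the premise's sense."""
    return SENSES[conclusion] <= SENSES[premise]

print(licenses_inference("chair", "physical object"))  # True:  (2.18) to (2.19)
print(licenses_inference("chair", "woman"))            # False: (2.18) to (2.27)
```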

Next, Katz discusses the notions of lexical and derived readings, which are, in a sense, the inputs and outputs, respectively, of the process of semantic composition. As the name suggests, lexical readings are what is stored in the dictionary. When a syntactic object hits the semantic component of the grammar, the first step is to replace the terminal nodes with their lexical readings. Derived readings are generated by applying projection rules to the first level of non-terminal nodes, and then the next level, and so on until the syntactic object is exhausted.
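
Schematically, the composition pass might look like the following bottom-up traversal, which is my own toy rendering rather than Katz’s actual rule system; the dictionary entries and the projection rule are placeholders.

```python
# A toy bottom-up composition pass: terminals get lexical readings from the
# dictionary; non-terminals get derived readings by applying a projection
# rule to their children's readings. (A schematic sketch, not Katz's system.)
from itertools import product

DICTIONARY = {"light": [{"Low weight"}, {"Inconsequential"}],
              "book": [{"Physical", "Artifact"}]}

def interpret(node, project):
    """node is either a terminal string or a tuple of child nodes."""
    if isinstance(node, str):          # terminal: look up its lexical readings
        return DICTIONARY[node]
    child_readings = [interpret(child, project) for child in node]
    return project(child_readings)     # non-terminal: derive new readings

def naive_project(children):
    """A crude projection rule: amalgamate one reading from each child."""
    return [set().union(*combo) for combo in product(*children)]

print(interpret(("light", "book"), naive_project))
```

As the next paragraph explains, a rule as permissive as naive_project overgenerates; the readings it derives must be restricted.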

The process of deriving readings, Katz asserts, must be restrictive in the sense that the interpretation of a sentence never comprises every combination of the lexical readings of its component parts. For instance, suppose the adjective “light” and the noun “book” have N and M senses in their respective lexical readings. If our process for deriving readings were unrestrictive, we would expect “light book” to have N×M senses while, in fact, fewer are available. We can see this even when we restrict ourselves to 2 senses for “light”—“low in physical weight” and “inconsequential”—and 2 senses for “book”—“a bound collection of paper” and “a work of literature”. Restricting ourselves this much, we can see that “light book” is 2-ways ambiguous, describing a bound collection of paper with a low weight or a work of literature whose content is inconsequential, but not a work of literature with a low weight or an inconsequential bound collection of paper. Our semantic theory, then, must be such that the compositional process it proposes can appropriately restrict the class of derived readings for a given syntactic object.

To ensure this restrictiveness, Katz proposes that the senses that make up a dictionary entry are each paired with a selectional restriction. To illustrate this, he considers the adjective “handsome”, which has three senses: when applied to a person or artifact, it has the sense “beautiful with dignity”; when applied to an amount, it has the sense “moderately large”; when applied to conduct, it has the sense “gracious or generous”. So, for Katz, the dictionary entry for “handsome” is as in (2.30).

(2.30) "handsome";[+Adj,…];(Physical),(Object),(Beautiful),
                           (Dignified in appearance),
                           <(Human),(Artifact)>
                           (Gracious),(Generous),<(Conduct)>
                           (Moderately large),<(Amount)>

Here the semantic markers in angle brackets represent the markers that must be present in the senses that “handsome” is applied to.
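
Here is how such restrictions could prune the naive N×M space of derived readings, using the “light book” example from above. The pairing of a sense with a set of required markers is my own simplification of Katz’s angle-bracket notation.

```python
# Toy senses: each adjective sense pairs its own markers with the markers
# required of whatever it applies to (the angle-bracketed restriction).
# The decompositions are my own simplifications, not Katz's entries.
from itertools import product

LIGHT = [
    ({"Low weight"}, {"Physical"}),      # "low in physical weight"
    ({"Inconsequential"}, {"Content"}),  # "inconsequential"
]
BOOK = [
    {"Physical", "Artifact"},            # "a bound collection of paper"
    {"Content", "Work of literature"},   # "a work of literature"
]

def derived_readings(adj_senses, noun_senses):
    """Combine senses pairwise, keeping a combination only if the noun's
    sense contains every marker demanded by the adjective's restriction."""
    return [markers | noun
            for (markers, restriction), noun in product(adj_senses, noun_senses)
            if restriction <= noun]

# 2 x 2 = 4 naive combinations, but only 2 derived readings survive:
for reading in derived_readings(LIGHT, BOOK):
    print(sorted(reading))
```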

This solution to the problem of selection may seem stipulative and ad hoc—I know it seems that way to me—but recall that this is an early chapter in a book published in 1972. If we compared it to the theories of syntax and phonology of the time, they might appear similarly unsatisfying. The difference is that syntactic and phonological theories have since developed into more formalized and, hopefully, explanatory theories through the collaborative effort of many researchers, while Katz’s theory never gained the traction required to spur that level of collaboration.

Katz closes out this section with a discussion of “semantic redundancy rules” and projection rules. Rather than discuss these, I move on to the final section of the chapter.

6. Preliminary definitions of some semantic properties and relations

Here Katz shows the utility of the theory that he has thus far sketched. That is, he looks at how the semantic properties and relations identified in chapter 1 can be defined in the terms introduced in this chapter. These theoretical definitions are guided by our common sense definitions, but Katz is careful to stress that they are not determined by them. So, for instance, two things are similar when they share some feature(s). Translating this into his theory, Katz gives the definition in (2.33) for semantic similarity.

(2.33) A constituent Ci is semantically similar to a constituent Cj on a sense just in case there is a reading of Ci and a reading of Cj which have a semantic marker in common. (They can be said to be semantically similar with respect to the concept φ in case the shared semantic marker represents φ.)

Note that we can convert this definition into a scalar notion, so we can talk about degrees of similarity in terms of the number of shared markers. Katz does this implicitly by defining semantic distinctness as sharing no markers and synonymy as sharing all markers.
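
Treating readings as sets of markers, these definitions reduce to elementary set operations, as in the following sketch (again my own toy rendering, not Katz’s formalism):

```python
# Similarity, distinctness, and synonymy as set operations on readings.
def similar(r1: set, r2: set) -> bool:
    return bool(r1 & r2)      # at least one shared marker

def distinct(r1: set, r2: set) -> bool:
    return not (r1 & r2)      # no shared markers

def synonymous(r1: set, r2: set) -> bool:
    return r1 == r2           # all markers shared

def similarity_degree(r1: set, r2: set) -> int:
    """The scalar variant: degree of similarity as a count of shared markers."""
    return len(r1 & r2)
```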

Similarity is a rather simple notion, and therefore has a simple definition; others require some complexity. For instance, analytic statements like “Liars lie” are vacuous assertions due to the fact that the meaning of the predicate is contained in the meaning of the subject. Here, Katz gives the definition one might expect, but it is clear that more needs to be said, as the notions of subject and predicate are more difficult to define. More on this in later chapters.

A more puzzling and less often remarked upon semantic relation is antonymy—the relation that holds of the word pairs in (2.46) and of the set of words in (2.47).

(2.46) bride/groom, aunt/uncle, cow/bull, girl/boy, doe/buck

(2.47) child/cub/puppy/kitten/cygnet

Katz notes that although antonymy is generally taken to be merely lexical, it actually projects to larger expressions (e.g., “our beloved old cow”/”our beloved old bull”), and is targeted by words like “either” as demonstrated by the fact that (2.49a) is meaningful while (2.49c) is anomalous.

(2.49)
a. John is well and Mary’s not sick either.
c. John is well and Mary’s not {well/foolish/poor/dead} either.

In order for antonymy to be given an adequate theoretical definition, then, it must be expressed formally. Katz does this by marking semantic markers that represent antonymy sets with a superscript. For instance, “brother” and “sister” would be represented as (Sibling)(M) and (Sibling)(F), respectively. Again, this is clearly stipulative and ad hoc, but that is to be expected at this stage of a theory. In fact, Katz seems to have been revising his theory up to his death, with the colour incompatibility problem—the question of why the sentence “The dot is green and red” is contradictory—occupying the focus of a 1998 paper of his and a section of his posthumous book. Even Katz’s ad hoc solution to the problem, though, is miles ahead of any solution that could possibly be given in current formal semantics—which bases its definition of meaning on reference—because, to my knowledge, there is no way to account for antonymy in formal semantics. Indeed, the mere fact that Katz is able to give any theoretical definition of antonymy puts his theory well ahead of formal semantics.
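
To see how the superscript device could be formalized, here is a toy encoding in which a marker is a base category paired with a value drawn from its antonymy set. Both the representation and the definition of antonymy below are my own inventions for illustration, not Katz’s.

```python
# Encode (Sibling)^(M) vs (Sibling)^(F) as ("Sibling", "Male") vs
# ("Sibling", "Female"). An invented illustration, not Katz's formalism.
def values(sense: set, base: str) -> set:
    """All values a sense assigns to a given base marker."""
    return {v for b, v in sense if b == base}

def antonymous(sense1: set, sense2: set) -> bool:
    """Antonymous, on this toy definition, just in case the senses share a
    base marker but assign it different values from its antonymy set."""
    shared = {b for b, _ in sense1} & {b for b, _ in sense2}
    return any(values(sense1, b) != values(sense2, b) for b in shared)

BROTHER = {("Sibling", "Male"), ("Human", None)}
SISTER = {("Sibling", "Female"), ("Human", None)}
print(antonymous(BROTHER, SISTER))  # True: they differ on the Sibling value
```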

Conclusion

Katz’s rough sketch of a semantic theory is already fairly successful in that it’s able to provide concrete definitions of many of the semantic notions that he identifies in the first chapter.2 I don’t believe this success is due to Katz’s ingenuity, but rather to the fact that he approached theory-building as the central activity in semantic inquiry rather than as an arcane peripheral curiosity. Since theory-building is central, it can occur in tandem with the analysis of linguistic intuition.

In the next chapter, Katz responds to criticisms from his contemporaries. I’m not sure how enlightening this is for modern audiences, so I might skip it. We’ll see…


  1. ^ This argument, of course, leads pretty quickly to a classic problem inherent in the notion of abstract objects: the problem of how abstract objects can interact with the physical world. We could, of course, get around this by denying that concepts and propositions are abstract, but then we need to explain how two different people could have the same thought at different times, in different places. I’m not sure which is the best choice, and I’m not sure that linguistics (or any science) is up to the task of deciding between the two, so I’ll just proceed by going along with Katz’s realist attitude about abstract objects, with the caveat that it might be wrong—a kind of methodological Platonism.
  2. ^ Katz does not give definitions for presupposition or question-answer pairs here; more on those in later chapters.

On the general character of semantic theory (Part a)

(AKA Katz’s Semantic Theory (Part IIIa). This post discusses chapter 2 of Jerrold Katz’s 1972 opus. For my discussion of chapter 1, go here.)

Having delineated in chapter 1 which questions a semantic theory ought to answer, Katz goes on in chapter 2 to sketch the sort of answer that such a theory would give. He starts at a high level, discussing the very notion of a natural language, and ends up with some of the formal details of the theory that he aims to develop.

Katz begins by reminding the reader that the questions of meaning—questions 1–15 below—are absolute questions. That is, they aren’t meant to be relativized to any particular language.

  1. What are synonymy and paraphrase?
  2. What are semantic similarity and semantic difference?
  3. What is antonymy?
  4. What is superordination?
  5. What are meaningfulness and semantic anomaly?
  6. What is semantic ambiguity?
  7. What is semantic redundancy?
  8. What is semantic truth (analyticity, metalinguistic truth, etc.)?
  9. What is semantic falsehood (contradiction, metalinguistic falsehood, etc.)?
  10. What is semantically undetermined truth or falsehood (e.g., syntheticity)?
  11. What is inconsistency?
  12. What is entailment?
  13. What is presupposition?
  14. What is a possible answer to a question?
  15. What is a self-answered question?

So, asking What is semantic truth in English? is kind of like asking What is a hiccup to a Canadian? This, Katz acknowledges, makes a strong empirical claim, namely, that every natural language should exhibit the properties whose definitions are requested by questions 1–15.

This claim made me, as a syntactician, think about which notions I would include in the field of syntax as universal in this sense. Notions like sentence or phrase would certainly be there, and category would likely be there. Would subject, predicate, object, and the like be there? Would modification, or transformation? How about interrogative, declarative, imperative, etc.? Notions like word/morpheme, or linear precedence, certainly were included in early versions of syntax, but more recently they tend either to be banished from the theory or dissolved into other notions.

I know of very few syntacticians who ask these questions. Perhaps this is because syntax has decidedly moved beyond the early stage in which Katz found semantics in 1972, but it still behooves us to keep those questions in mind, if only for the purposes of introducing syntax to students. Furthermore, if we keep these questions in mind, they can serve as a guide for research. Before setting out to answer a research question, the researcher could try to trace that question back to one of the basic questions to judge its likely fruitfulness. I would be curious to see how the papers in, say, LI would fare under such an analysis. But I digress.

Katz continues, asserting that a theory of linguistic meaning must be embedded in a larger theory of natural language, and that in order to develop such a theory we must have some sense of what sort of thing a natural language might be. It is this question that occupies the first part of this chapter.

1. Theories about the objective reality of language


The first thing Katz does here is distinguish between the two main competing conceptions of language (at least the main conceptions of his day): the traditional rationalist conception of language as “the internalized rules of grammar that constitute the fluency of its native speakers”, and the empiricist conception of language as “a vast stock of sound chunks classifiable into various phonological and syntactic categories” (p12). He opts for rationalism, citing the now familiar arguments against the empiricist stance. First off, we can’t identify a language L with the set S of all actual utterances of L because any competent speaker of L can easily construct an expression that lies outside of S. This is because although practical factors force every expression of a language to be of finite length, there is no theoretical limit to the length of an expression; no matter the length of an expression, there is always a grammatical way of lengthening it.
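
As a quick illustration of that last point, a single embedding rule already guarantees that every expression has a grammatical extension. The toy rule below is my own example, not Katz’s:

```python
# One recursive embedding rule suffices to show there is no longest sentence:
# any expression can be grammatically extended, so no finite set of actual
# utterances can exhaust the language.
def sentence(depth: int) -> str:
    if depth == 0:
        return "it is raining"
    return "Mary said that " + sentence(depth - 1)

for d in range(3):
    s = sentence(d)
    print(s[0].upper() + s[1:] + ".")
# It is raining.
# Mary said that it is raining.
# Mary said that Mary said that it is raining.
```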

One could, Katz continues, expand S to be the set of all expressions that a speaker of L could utter without eliciting an odd response from a hearer. However, this amounts to defining L in terms of dispositions of a speech community, namely the dispositions to accept or reject strings of L. In practical reality, though, these dispositions can be wildly inconsistent depending on a variety of psychological and external factors, so if we want a consistent definition we need to clean up our notion of dispositions. Katz does so by “incorporating recursive mechanisms of sentence generation” (p15), or, as they’re more commonly referred to, generative grammars. And once we incorporate generative grammars, we have a rationalist conception of natural language.

Thus far, there’s nothing too surprising. Katz gives us a fairly standard argument in favour of the rationalist conception of language. But this is where Katz’s discussion gets a little strange; this is where he reveals his realist (in the philosophical sense) view of language. It is a mistake, he argues, to identify, say, English with the actual internalized rules in English-speakers’ brains. This would be like “identifying arithmetic with the concrete realizations of the mathematical rules in the heads of those who can compute using positive real numbers” (p16). As evidence for this claim, Katz cites “dead languages” like Sanskrit, which seems to exist (we can make true or false assertions about it) even though its rules are not actualized in any human’s brain the way that Hindi-Urdu’s rules are. Although he doesn’t say it explicitly here, Katz is arguing that languages are abstract entities, like Platonic forms. In his own words: “A language is not itself subject to the fate of the mortals who speak it. It is some sort of abstract entity, whatever it is that this means.” (p16)

Katz further defends this view by identifying it with the standard scientific practice of idealization. So a natural language like, say, Punjabi, or a biological species like Homo sapiens, is an idealization in that it can’t be defined in terms of concrete examples. Similarly, the notions of ideal gases, perfect vacuums, and massless strings are the idealizations of physics. He also cites Chomsky’s discussion in Aspects of the “ideal speaker-listener” and Rudolf Carnap, who makes a similar observation, that one cannot directly investigate language but must do so by comparison to a constructed language.

Katz’s proposal and argument that languages are abstract entities strikes me as interesting but a bit confused. The argument from dead languages is compelling, and could perhaps be made even stronger. Consider, for instance, reconstructed languages such as Proto-Indo-European or Proto-Algonquian. At best we know a scant few details about these languages, but we can say with some certainty that they were each spoken by some speech community. Do they exist in the same sense as Sanskrit does? I think the answer has to be yes, as the only difference between a reconstructed language and a dead language seems to be a written record of that language, and that is clearly not the difference between a language and a non-language.

The argument based on idealization, though, seems to be slightly confused. The comparison of a language with a species does seem to be apt, and might point towards his conclusion, but the comparison to ideal gases and the like suggests, I think, a different notion of idealization, the one that I’ve always taken Chomsky to be using. Under this sense, the idealized objects that scientists employ are not hypothesized to be real, but rather to be useful. I don’t believe even the most realist of scientists believes in the existence of frictionless planes. Scientists use these idealizations to reveal real, but non-apparent, aspects of the world. In discussing the ideal speaker-listener, Chomsky was not suggesting that such a person exists, just that we ought to use this idealized person to help reveal a real aspect of the world, namely, the human language faculty.

2. Effability

In the next section Katz espouses what he calls the principle of effability, which he attributes to a number of earlier philosophers (Frege, Searle, and Tarski). The essence of the principle is roughly that if a proposition or thought is expressible in any language, it is expressible in every language. He spends a good chunk of text defending and sharpening his principle, but I’ll set that discussion aside here and focus on why he proposes this principle. According to Katz, “effability alone offers a satisfactory basis for drawing the distinction between natural languages, on the one hand, and systems of animal communication and artificial languages, on the other” (p22). Despite this bold-seeming claim, Katz is rather hesitant regarding his principle. He admits that it is rather inchoate and probably not yet up to any empirical task. But only part of his claim is about the viability of effability; the other part is that no other property of natural language can distinguish it from other similar systems.

In particular, Katz takes aim at the properties that Chomsky tends to highlight as distinguishing factors for natural language: creativity, stimulus freedom, and appropriateness. Taking these one by one, he argues that none of them is unique to natural language. First, he considers creativity, which he takes to be the ability of a speaker-listener to produce and understand indefinitely many sentences. This, Katz argues, is a property of (a) any artificial language with recursive rules, and (b) certain animal communication systems, specifically bee communication. Next, Katz takes on stimulus freedom, which he argues means freedom from external stimuli, asserting that “[i]t cannot mean freedom from the control of internal stimuli as well.”1 This being the case, says Katz, stimulus freedom doesn’t make sense as a distinction. He also asserts that some animal behaviour displays such stimulus freedom. Finally, Katz argues that appropriateness is not part of linguistic competence—that it is extragrammatical—and also that some animal behaviour displays this property.

I take some issue with Katz’s critiques of each of the distinguishing properties individually, but I’ll set that aside for now to highlight a broader issue. Even if we take Katz’s critiques at face value, they still don’t refute Chomsky’s claim, because Chomsky’s claim isn’t that each of the three properties distinguishes natural language, but that the conjunction of the three is what distinguishes natural language. That is, natural language is distinct from animal communication and artificial language in that it is creative, stimulus-free, and appropriate. So, for instance, even if a bee can produce novel dances, it does so in response to a stimulus. Artificial languages might be creative, but it makes little sense to talk about stimulus freedom or appropriateness with respect to them. So Katz’s critiques don’t really have that much force.

At any rate, the principle of effability, while an interesting notion, doesn’t seem to be too crucial for Katz’s theory. The index of the book lists only one reference to effability outside this section. So, on to the next.

3. Competence and Performance

In the final table-setting section of this chapter, Katz takes up and defends Chomsky’s competence/performance distinction. His discussion, though, differs from most that I’ve encountered in that he frames it around a debate between Chomsky and Gilbert Harman, one of Chomsky and Katz’s empiricist contemporaries. Katz first clears a significant portion of underbrush in this debate in order to get to what he takes to be the crux of the issue: the proposal that linguistic competence consists in the unconscious knowledge of general principles. He summarizes Harman’s issue, which seems to revolve around the notion of grammatical transformations, as follows.

[G]iven that we can say that speakers of a language know that certain sentences are ungrammatical, certain ones ambiguous, certain ones related in certain ways to others, and so on, what licenses us to go further and say that speakers know (tacitly) the linguistic principles whose formalization in the grammar explain the noted ungrammaticality, ambiguity, sentential relations and the like?

(p28)

This challenge, Katz seems to argue, is based not on the empiricist/rationalist debate in epistemology, but rather on the realist/fictionalist debate in the philosophy of science.2 Harman is saying that a transformational grammar is maybe a good model of a speaker-listener of a given language, but it’s just that: a model. Katz responds, with the help of a quote from his erstwhile co-author Jerry Fodor, that the only sensible conclusion to be drawn from the empirical accuracy of a scientific theory is that the theory is a true description of reality, at least insofar as it is empirically accurate. There is, of course, much more to say about this, but I’ll leave it there.

Thus, Katz sets up his conception of language in order to be able to sketch a theory of semantics within a theory of language. In my next post I will take up the details of that sketch.


  1. ^ Katz cites Cartesian Linguistics for Chomsky’s distinguishing factors, and it’s likely that CL doesn’t discuss stimulus freedom too extensively. In more recent discussions, though, Chomsky does include internal stimuli in the property of stimulus freedom, so it’s not clear that Katz’s critique here still holds.
  2. ^ I suspect that there is no strong demarcation between epistemology and philosophy of science, but I can’t say with any confidence one way or the other.

The Scope of Semantics

(AKA Katz’s Semantic Theory (Part II). This post discusses chapter 1 of Jerrold Katz’s 1972 opus. For my discussion of the preface, go here.)

If you’ve taken a semantics course in the past decade or two, or read an introductory textbook on the topic published in that time span, you probably encountered, likely at the outset, the question What is meaning? followed almost immediately by a fairly pat answer. In my experience, the answer given to that question was reference1—the meaning of an expression, say dog, is the set of things in the world that that expression refers to, the set of all dogs in this case. Now, I can’t exactly recall my reaction the first time a teacher presented that as an answer to the question of meaning. I might have been wholly unimpressed, or I might have had my mind blown, the way that an impressionable young mind can be blown by someone giving a pat, confident answer to a deep question. Either way, I know that every time I’ve heard that answer2 to the question of meaning since, it’s become less impressive, to the point of being slightly offensive. At best, a pat answer is incomplete; at worst, it’s flat wrong.

Of course, I never really had a better answer to the question of meaning, and most of the other answers on offer seemed much worse. I couldn’t shake the unease I had with reference as an answer, but I couldn’t fully articulate that unease. Which is why I was very quickly drawn into Semantic Theory—Katz pinpoints and articulates the source of that unease on page 3 of the book:

The misconception, it seems to me, lies in the supposition that the question “What is meaning?” can be answered in a direct and straightforward way. The question is generally treated as if it were on par with questions like “What is the capital of France?” to which the direct and straightforward answer “Paris” can be given. It is supposed that an answer can be given of the form “Meaning is this or that.” But the question “What is meaning?” does not admit of a direct “this or that” answer; its answer is instead a whole theory [emphasis added]. It is not a question like “What is the capital of France?” “When did Einstein retire?” “Where is Tasmania?” because it is not merely a request for an isolated fact, a request which can be answered simply and directly. Rather it is a theoretical question, like “What is matter?” “What is electricity?” “What is light?”

(Katz 1972, p3)

Imagine if, instead of developing theories of matter, electricity, and light, the early physicists had been satisfied with giving a simple answer like Matter is anything you can touch and feel. We wouldn’t have a science of physics, or chemistry. We likely wouldn’t have any science as we know it.

Katz goes on to acknowledge that, if one were to ask a physicist what electricity is, they might give a simple answer, but notes that such an answer would be a highly condensed version of the theory of electromagnetism that has been developed over centuries of inquiry. Similarly, if you were to ask a phonologist what a syllable is, or what pronunciation is, or if you asked a syntactician what a sentence is, or what grammar is, you might get a similar condensed answer with several big caveats. You certainly wouldn’t get a simple straightforward answer. In fact, one of the first tasks in any introduction to linguistics is to disabuse students of any simple answers that they may have internalized, and even to disabuse them of the notion that simple answers to such questions even exist.

This seems to leave us in a bit of a bind. If we don’t know what meaning is, how can we study it? Katz’s response: the same way we did with chemistry, biology, phonology, etc.—We identify a set of phenomena that are definitely under the umbrella of meaning, and go from there. Not to disappoint, Katz identifies 15 such phenomena which he frames as subquestions to the meaning question:

  1. What are synonymy and paraphrase?
  2. What are semantic similarity and semantic difference?
  3. What is antonymy?
  4. What is superordination?
  5. What are meaningfulness and semantic anomaly?
  6. What is semantic ambiguity?
  7. What is semantic redundancy?
  8. What is semantic truth (analyticity, metalinguistic truth, etc.)?
  9. What is semantic falsehood (contradiction, metalinguistic falsehood, etc.)?
  10. What is semantically undetermined truth or falsehood (e.g., syntheticity)?
  11. What is inconsistency?
  12. What is entailment?
  13. What is presupposition?
  14. What is a possible answer to a question?
  15. What is a self-answered question?

A formidable list to be sure, but, as far as I can tell, modern formal semantics only cares about 11–14.3 Katz expands on each of these with representative examples. I won’t go into those examples, but they are all based on intuitions that a person would have about linguistic meaning. If one takes these as the leading questions of semantic theory, Katz argues, then the simple answers to the meaning question lose their appeal, as they do not answer subquestions 1–15, or at least cannot do so without a complex semantic theory to supplement them.

Furthermore, Katz points out that the debates between the competing simple answers all use arguments based on the phenomena that 1–15 ask about. Take, for instance, the best known critique of the referentialist answer. If we assume that meaning=reference, then any two expressions that have the same referent must be synonymous. Gottlob Frege, the godfather of formal semantics, argued that there are expressions which have different meanings but the same referent, the classic example of which is the morning star and the evening star. The two expressions have different meanings (they differ as to when the star appears in the sky); however, they refer to the same object (the planet Venus). And once you start to think about it, you can come up with a seeming infinity of such examples.

Katz goes on to show that critiques of other simple answers to the meaning question are based on what he calls “strong pretheoretical intuitions,” all of which raise at least one of questions 1–15. His point here seems to be that we can’t divorce our semantic theory from pretheoretical intuitions such as the ones that form the basis of 1–15, so why not just embrace them? Why not throw away the “leading principles” and just try to build a theory that answers 1–15?

Katz closes the chapter by discussing skepticism with regards to meaning. It’s hard to honestly maintain skepticism, he argues, when we can marshal an extensive body of evidence that meaning exists. That body of evidence starts with an explication of 1–15, but likely extends beyond that. It is even harder to honestly maintain skepticism if we can build a theory that shows the regular and law-like behaviour of the evidence marshaled. Taking a suggestion from Quine (who played a major role in the preface), Katz compares the situation that he finds himself in to that which ancient astronomers found themselves in:

Astronomy found its answer to “What are planets?” by constructing a theory that explained planetary motion on the assumption that planets are physical objects that obey standard mechanical laws. In the same spirit, once we construct a theory that can successfully explain a reasonably large portion of semantic phenomena, we can base our answer to “What is meaning?” on what the theory had to assume meaning was in order to provide its explanations.

(Katz 1972, p10)

Semantics, as it is taught and studied today, is commonly considered by non-semanticists to be the most arcane and opaque subfield of linguistics. It’s not clear which is more obscure: the questions that semanticists ask or the formalism that they use to answer those questions. I often wonder if there is something endemic to questions of meaning that makes them seem arcane to many, or if it is a failing in the standard answer that leads to this feeling. This chapter of Katz’s book, for me, rules out the former. The questions in 1–15 are far from arcane, or, at least, they’re no more arcane than the questions that occupy the other subfields of linguistics. Maybe if we took Katz’s view of semantics, fewer students would run screaming (or grumbling, or yawning) from semantics classes.

In the next chapter, entitled “On the general character of semantic theory” Katz begins constructing his theory.


Footnotes (the links might not work, sorry)

  1. ^ I learned my semantics in a generative department where reference was the answer. Other departments might have had another answer.
  2. ^ and sometimes I’ve even given that answer as a teacher.
  3. ^ Entailment and inconsistency are the key phenomena. Presuppositions are useful as diagnostics. Questions have only recently gained currency, it seems.

Katz’s Semantic Theory (Part I)

(This is intended to be the first in a series of posts in which I work my way through Semantic Theory by Jerrold Katz)

Through a somewhat meandering intellectual journey that I undertook when I probably should have been writing, I found myself reading the late Jerrold J. Katz’s 1972 book entitled Semantic Theory. While I began that book with a certain amount of cynicism—I think I’ve been disappointed by virtually every book that tries to develop a theory of semantics—that cynicism evaporated very quickly. It evaporated as soon as it became obvious that the theory that Katz intended to develop was radically different from the theory of semantics that contemporary linguists assume, and that the source of that radical difference was that Katz shared the core assumptions of generative grammar.

That last sentence, or rather its implication, may be a bit inflammatory, but I think it’s justified, for reasons that Katz elucidates.

In his preface, Katz gives something of a historical narrative of linguistics and logic in the first half of the 20th century. He picks this time frame because of what he views as an unfortunate schism that occurred in those years. His basic story is as follows. Throughout most of their history, logic and linguistics were united by their interest in what Katz calls “the classical problem of logical form,” which is clear when you consider, for instance, that the notion of subject and predicate comes from Aristotle’s logical treatise On Interpretation, or that one of the leading logical works from the Renaissance to the 20th century, The Port Royal Logic, was written and published along with the Port Royal Grammar. In the 20th century, though, something happened and the two fields went their separate ways, away from the classical problem.

By Katz’s estimation, three factors led to the schism: (i) the professionalization of the fields, (ii) the difficulty of the classical problem, and (iii) the dominance of empiricism in the fields. Since the story of linguistics in this period has been covered quite a bit, Katz doesn’t waste much time on it, and neither will I. The story of logic, however, interests Katz (more of a philosopher than a linguist) a great deal, and I think it is useful in understanding current theories of semantics. Logicians in the early 20th century, influenced by Katz’s three factors, abandoned the problem of logical form and sought out “manageable problems.” The problem, or perhaps program is the better word for it, that they landed on was the development of artificial languages with which to represent thought. These artificial languages, unlike natural language, wore their logical form on their sleeves, to borrow Katz’s formulation.

In order to formulate an artificial logical language, Quine—one of Katz’s chief villains—sought to identify and highlight the “logical particles” of natural language as distinct from the extra-logical vocabulary. The logical particles (e.g., and, or, not, if—then) are those that have inferential powers, while the extra-logical words (e.g., dog, bachelor, Moira, lamp) are those that have only referential powers. This seems fairly intuitive, but Katz argues that there is no non-arbitrary way of dividing logical vocabulary from extra-logical vocabulary. This is certainly an odd assertion. I mean, it’s pretty obvious that and is a logical word and dog isn’t, right? While it might be a valid intuition that these are different sorts of words, what Katz argues is that the set of words that have inferential powers is much larger than what we might call the logical particles.

To show this, Katz walks us through a possible method for identifying logical particles and demonstrates that this method cannot actually rule out any word as a logical particle. The method starts by examining a valid inference such as (1)–(3).

(1) All terriers are dogs.
(2) All dogs are animals.
(3) Hence, all terriers are animals.

We can see that (1)–(3) remains valid regardless of the meaning of dogs, animals, and terriers; that is, we could replace the tokens of those words with tokens of virtually any other nouns and we’d still have a valid inference. By the same token, though, the validity of (1)–(3) depends on the meaning of all, are, and hence. So, we remove from our list of candidates for logical particles the words that can be factored out of such valid inferences. Katz argues that, while this method gives the expected results for classical syllogisms and perhaps some other logical inferences, things get messy when we look at the full range of valid inferences.

Katz presents (4) and (5) as a valid inference, but argues that the method of factoring we applied to (1)–(3) gives different results here.

(4) Socrates is a man.
(5) Hence, Socrates is male.

We can factor out Socrates here, but not man or male. The inference from (4) to (5) seems to depend on the meaning of the latter two words. If we follow our methodology, then we have to add male and man to our logical particles, because they seem to have inferential powers. With a few moments of thought, we can see that this leads to a situation where there is no logical/extra-logical distinction, because every word is a logical particle. Thus Quine’s program is doomed to failure.

As anyone who has learned any formal logic knows, though, Quine’s program became the orthodoxy. And, in fact, his conception of logic is, in many ways, the basis for semantics as practiced by contemporary generative grammarians. Katz identifies the work of George Lakoff and that of Donald Davidson as early attempts to apply Quinean logic to language, and it continues to today.

As something of an aside, formal semanticists seem to take as given the assertion that there is a privileged class of logical particles, and try to analyze a portion of the vocabulary that lies outside of that class so that it can be expressed using the logical particles and some simple atomic extra-logical “words.” What belongs to that analyzable portion of vocabulary is not well defined; I know that know and should are in that portion and that dog and wallet are outside of it, but I can’t really get much more specific than that.

What’s stranger is that even some of those words that correspond to logical particles are up for analysis. And triggers some implicatures, which are often analyzed using the Quinean tools. The meaning of if—then is also up for debate. I almost wrote a paper as part of my PhD on conditionals, and the one thing that the semantic literature seems to agree on is that the meaning of if—then is not the material conditional (→). Being a naive syntactician with no understanding of the history of logic, I basically took formal logic as gospel. It never occurred to me that the logician’s conception of conditional statements could be flawed.

Of course, if Katz is correct, then logics built on Quine’s logical/extra-logical distinction are the proverbial houses built on sand. And if I’m correct that formal semantics is built on Quinean logic, then formal semantics is a proverbial house built on a house built on sand. End of aside.

Having argued that the empiricist theories of logic such as those of Quine, Frege, and Carnap are unsuited for inclusion in a rationalist theory of language such as generative grammar, Katz moves on to the next task, the one that occupies the remainder of his book: the task of constructing a rationalist and scientific theory of semantics. According to Katz, this task was viewed by the philosophers of his day as an impossibility, and I don’t know if much has changed.

In fact, it seems to me that among semanticists and a number of generative syntacticians, there is a strong hostility towards rationalist conceptions of semantics as put forth by Katz (and also Chomsky). As an illustrative anecdote, I recall once talking with an established linguist and expressing some skepticism towards modern formal semantics. When I suggested that a more rationalist, intensionalist theory of semantics might be fruitful, they responded that, while I might be right, if I decided to pursue that line of research, I would never be hired as a semanticist. Luckily for me, of course, I’m a syntactician, but that’s still a rather chilling thing to hear. End of second aside.

Katz concludes his preface by putting his program in context and outlining the structure of the book. I won’t bore you with the details, but will only preview chapter 1, “The scope of semantics,” wherein Katz considers the question what is meaning? and gives a shockingly sensible answer: That’s a complex question; we’ll need to answer it scientifically.