Katz’s Semantic Theory (Part I)

(This is intended to be the first in a series of posts in which I work my way through Semantic Theory by Jerrold Katz.)

Through a somewhat meandering intellectual journey that I undertook when I probably should have been writing, I found myself reading the late Jerrold J. Katz’s 1972 book Semantic Theory. While I began the book with a certain amount of cynicism—I think I’ve been disappointed by virtually every book that tries to develop a theory of semantics—that cynicism evaporated very quickly. It evaporated as soon as it became obvious that the theory Katz intended to develop was radically different from the theory of semantics that contemporary linguists assume, and that the source of that radical difference was that Katz shared the core assumptions of generative grammar.

That last sentence, or rather its implication, may be a bit inflammatory, but I think it’s justified, for reasons that Katz elucidates.

In his preface, Katz gives something of a historical narrative of linguistics and logic in the first half of the 20th century. He picks this time frame because of what he views as an unfortunate schism that occurred in those years. His basic story is as follows. Throughout most of their history, logic and linguistics were united by their interest in what Katz calls “the classical problem of logical form.” This is clear when you consider, for instance, that the notion of subject and predicate comes from Aristotle’s logical treatise On Interpretation, or that one of the leading logical works from the Renaissance to the 20th century, The Port Royal Logic, was written and published along with the Port Royal Grammar. In the 20th century, though, something happened and the two fields went their separate ways, away from the classical problem.

By Katz’s estimation, three factors led to the schism: (i) the professionalization of the fields, (ii) the difficulty of the classical problem, and (iii) the dominance of empiricism in the fields. Since the story of linguistics in this period has been covered quite a bit, Katz doesn’t waste much time on it, and neither will I. The story of logic, however, interests Katz (more of a philosopher than a linguist) a great deal, and I think it is useful in understanding current theories of semantics. Logicians in the early 20th century, influenced by Katz’s three factors, abandoned the problem of logical form and sought out “manageable problems.” The problem, or perhaps program is the better word for it, that they landed on was the development of artificial languages with which to represent thought. These artificial languages, unlike natural language, wore their logical form on their sleeves, to borrow Katz’s formulation.

In order to formulate an artificial logical language, Quine—one of Katz’s chief villains—sought to identify and highlight the “logical particles” of natural language as distinct from the extra-logical vocabulary. The logical particles (e.g., and, or, not, if—then) are those that have inferential powers, while the extra-logical words (e.g., dog, bachelor, Moira, lamp) are those that have only referential powers. This seems fairly intuitive, but Katz argues that there is no non-arbitrary way of dividing the logical vocabulary from the extra-logical vocabulary. This is certainly an odd assertion. I mean, it’s pretty obvious that and is a logical word and dog isn’t, right? While it might be a valid intuition that these are different sorts of words, what Katz argues is that the set of words with inferential powers is much larger than what we might call the logical particles.

To show this, Katz walks us through a possible method for identifying logical particles and demonstrates that this method cannot actually rule out any word as a logical particle. The method starts by examining a valid inference such as (1)–(3).

 (1) All terriers are dogs.
 (2) All dogs are animals.
 (3) Hence, all terriers are animals.

We can see that (1)–(3) remains valid regardless of the meaning of dogs, animals, and terriers; that is, we could replace the tokens of those words with tokens of virtually any other nouns and we’d still have a valid inference. By the same token, though, the validity of (1)–(3) depends on the meaning of all, are, and hence. So we remove from our list of candidates for logical particles those words that can be factored out of such valid inferences. Katz argues that, while this method gives the expected results for classical syllogisms and perhaps some other logical inferences, things get messy when we look at the full range of valid inferences.
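The schematic character of the syllogism can be made concrete. Here is a sketch in Lean (my illustration, not Katz’s) in which the three nouns are arbitrary predicates, so the proof goes through no matter what they mean:

```lean
-- The syllogism (1)–(3) is valid for *any* predicates Terrier, Dog, Animal,
-- which is exactly why those words can be "factored out" of the inference.
theorem syllogism {α : Type} (Terrier Dog Animal : α → Prop)
    (h1 : ∀ x, Terrier x → Dog x)    -- (1) All terriers are dogs.
    (h2 : ∀ x, Dog x → Animal x) :   -- (2) All dogs are animals.
    ∀ x, Terrier x → Animal x :=     -- (3) Hence, all terriers are animals.
  fun x ht => h2 x (h1 x ht)
```

Only the logical scaffolding (all, are, hence) does any work here; the predicates are interchangeable.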

Katz presents (4) and (5) as a valid inference, but argues that the method of factoring we applied to (1)–(3) gives different results here.

 (4) Socrates is a man.
 (5) Hence, Socrates is male.

We can factor out Socrates here, but not man or male. The inference from (4) to (5) seems to depend on the meaning of the latter two words. If we follow our methodology, then we have to add male and man to our logical particles, because they seem to have inferential powers. With a few moments of thought, we can see that this leads to a situation where there is no logical/extra-logical distinction, because every word is a logical particle. Thus Quine’s program is doomed to failure.
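The contrast with the syllogism can be made formal in the same way (again, my illustration): (4)–(5) only goes through if the lexical fact that all men are male is supplied as an explicit premise, whereas the syllogism needed no such meaning postulate:

```lean
-- Unlike (1)–(3), the inference (4)–(5) is not schematically valid:
-- it needs the lexical premise "all men are male" to be stated outright.
theorem man_male {α : Type} (Man Male : α → Prop) (socrates : α)
    (lexical : ∀ x, Man x → Male x)  -- the meaning connection, made explicit
    (h4 : Man socrates) :            -- (4) Socrates is a man.
    Male socrates :=                 -- (5) Hence, Socrates is male.
  lexical socrates h4
```

On Quine’s method, that dependence on word meaning is exactly what would force man and male onto the list of logical particles.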

As anyone who has learned any formal logic knows, though, Quine’s program became the orthodoxy. And, in fact, his conception of logic is, in many ways, the basis for semantics as practiced by contemporary generative grammarians. Katz identifies the work of George Lakoff and that of Donald Davidson as early attempts to apply Quinean logic to language, and the approach continues to this day.

As something of an aside, formal semanticists seem to take as given the assertion that there is a privileged class of logical particles, and try to analyze a portion of the vocabulary that lies outside of that class so that it can be expressed using the logical particles and some simple atomic extra-logical “words.” What belongs to that analyzable portion of vocabulary is not well defined; I know that know and should are in that portion and that dog and wallet are outside of it, but I can’t really get much more specific than that.

What’s stranger is that even some of the words that correspond to logical particles are up for analysis. The word and triggers some implicatures, which are often analyzed using Quinean tools. The meaning of if—then is also up for debate. I almost wrote a paper as part of my PhD on conditionals, and the one thing that the semantic literature seems to agree on is that the meaning of if—then is not the material conditional (→). Being a naive syntactician with no understanding of the history of logic, I had basically taken formal logic as gospel. It never occurred to me that the logician’s conception of conditional statements could be flawed.
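For reference, here is the standard truth table for the material conditional (textbook logic, not anything specific to Katz), which is false only when the antecedent is true and the consequent false:

```latex
\begin{array}{cc|c}
p & q & p \rightarrow q \\
\hline
T & T & T \\
T & F & F \\
F & T & T \\
F & F & T \\
\end{array}
```

The last two rows are the usual sticking point: they make any conditional with a false antecedent come out true, which natural-language if—then plainly does not.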

Of course, if Katz is correct, then logics built on Quine’s logical/extra-logical distinction are the proverbial houses built on sand. And if I’m correct that formal semantics is built on Quinean logic, then formal semantics is a proverbial house built on a house built on sand. End of aside.

Having argued that the empiricist theories of logic such as those of Quine, Frege, and Carnap are unsuited for inclusion in a rationalist theory of language such as generative grammar, Katz moves on to the next task, the one that occupies the remainder of his book: the task of constructing a rationalist and scientific theory of semantics. According to Katz, this task was viewed by the philosophers of his day as an impossibility, and I don’t know if much has changed.

In fact, it seems to me that among semanticists and a number of generative syntacticians, there is a strong hostility towards rationalist conceptions of semantics as put forth by Katz (and also Chomsky). As an illustrative anecdote, I recall once I was talking with an established linguist, and I expressed some skepticism towards modern formal semantics. When I suggested that a more rationalist, intensionalist theory of semantics might be fruitful, they responded that, while I might be right, if I decided to pursue that line of research, I would never be hired as a semanticist. Luckily for me, of course, I’m a syntactician, but that’s still a rather chilling thing to hear. End of second aside.

Katz concludes his preface by putting his program in context, and outlining the structure of the book. I won’t bore you with the details, but only preview chapter 1 “The scope of semantics,” wherein Katz considers the question what is meaning?, and gives a shockingly sensible answer: That’s a complex question, we’ll need to answer it scientifically.



1)How is generative linguistics “rationalist” (rather than “empiricist”, I presume – or what even is the distinction meant)?
2)I think Katz fails to distinguish generalizable and non-generalizable inferences. (1)-(3) is generalizable, you can freely substitute predicates in there. (4)-(5) is not, it requires additional lexical knowledge that “All men are male” (also known as hyponymic relation, which is bound to hold between some predicates if you have loads and loads of them for our real-world experience, which is, in the end, pretty repetitive). And if it is false, as some versions of trans-language – where ‘male’ usually refers to biological sex whereas ‘man’ refers to gender identification – suggest, then (4)-(5) immediately becomes a false inference – Socrates may well be a trans-man, and it’s a fact of world knowledge that the actual Socrates was cis-man and thus male.
3)While “simple words” are “not-analyzable” in the sense that they are often not analyzed further into parts (though sometimes they are; and for many works in formal semantics omitting the part of lexical semantics where predicates like ‘bachelor’ are further decomposed is a simplification not to digress, not an ontological stance), they are part of analysis in the sense that they have a logical type, have valencies and so on.
4)Continuing the line of thought, the usual stance among those who mention this directly is that there are some lexical primitives, not that all, say, common nouns are primitives. I think Vierzbicka is the most explicit scholar in this regard, but her basic assumptions (not her suggestions for the actual set of primitives) seem to be somewhat shared.

1)How is generative linguistics “rationalist” (rather than “empiricist”, I presume – or what even is the distinction meant)?

“Rationalism” here refers to the philosophical position that innate reason is the primary source of knowledge, while “empiricism” refers to the position that experience is the primary source of knowledge. So the extreme position of rationalism is solipsism, while the extreme position of empiricism is the “blank slate” conception of human psychology. Of course, I don’t think anyone seriously holds either of those positions. Generative linguistics is rationalist in the sense that it assumes that most of our knowledge of language is innate. [Note: I saw your reply to this comment, but I had already written this so I left it.]

2)I think Katz fails to distinguish generalizable and non-generalizable inferences. (1)-(3) is generalizable, you can freely substitute predicates in there. (4)-(5) is not, it requires additional lexical knowledge that “All men are male” (also known as hyponymic relation, which is bound to hold between some predicates if you have loads and loads of them for our real-world experience, which is, in the end, pretty repetitive). And if it is false, as some versions of trans-language – where ‘male’ usually refers to biological sex whereas ‘man’ refers to gender identification – suggest, then (4)-(5) immediately becomes a false inference – Socrates may well be a trans-man, and it’s a fact of world knowledge that the actual Socrates was cis-man and thus male.

In the versions of trans-language that you’re referring to, the word “man”, the word “male”, or both mean something different, which means that in that language (4) and (5) may have different semantic representations than the ones I intended. It’s a simple fix, though: replace “male” in (5) with “human” or “person” or “adult”. Note that the point still holds if Socrates never existed, or was a time-travelling robot, or was actually three kids in a trenchcoat. The point is that the truth of (4) is related to the truth of (5) in a logical way, not a factual way.

3)While “simple words” are “not-analyzable” in the sense that they are often not analyzed further into parts (though sometimes they are; and for many works in formal semantics omitting the part of lexical semantics where predicates like ‘bachelor’ are further decomposed is a simplification not to digress, not an ontological stance), they are part of analysis in the sense that they have a logical type, have valencies and so on.

That’s precisely what I meant by “non-analyzable”—“primitive”. Every theory needs primitives, and those primitives have certain properties, but they don’t have subparts. By analogy, consider genetics, whose primitives are nucleotides. A geneticist doesn’t deny that each nucleotide is made up of atoms, which are made up of protons, neutrons, and electrons, which are made up of strings (maybe?), but they studiously ignore those facts in their theorizing.

4)Continuing the line of thought, the usual stance among those who mention this directly is that there are some lexical primitives, not that all, say, common nouns are primitives. I think Vierzbicka is the most explicit scholar in this regard, but her basic assumptions (not her suggestions for the actual set of primitives) seem to be somewhat shared.

In my experience, you’re right. Most working formal semanticists would likely agree that common nouns and verbs have underlying structure, but they tend to say either that studying that structure belongs to some other field (psychology, or philosophy, or literary theory, etc.) or that it would be nice to study it but sadly it’s beyond our abilities right now. Maybe Wierzbicka (I think that’s the proper Polish spelling) is an exception, though.