(This is intended to be the first in a series of posts in which I work my way through Semantic Theory by Jerrold Katz)
Through a somewhat meandering intellectual journey that I undertook when I probably should have been writing, I found myself reading the late Jerrold J. Katz’s 1972 book Semantic Theory. While I began the book with a certain amount of cynicism—I think I’ve been disappointed by virtually every book that tries to develop a theory of semantics—that cynicism evaporated very quickly. It evaporated as soon as it became obvious that the theory Katz intended to develop was radically different from the theory of semantics that contemporary linguists assume, and that the source of that radical difference was that Katz shared the core assumptions of generative grammar.
That last sentence, or rather its implication, may be a bit inflammatory, but I think it’s justified, for reasons that Katz elucidates.
In his preface, Katz gives something of a historical narrative of linguistics and logic in the first half of the 20th century. He picks this time frame because of what he views as an unfortunate schism that occurred in those years. His basic story is as follows. Throughout most of their history, logic and linguistics were united by their interest in what Katz calls “the classical problem of logical form.” This is clear when you consider, for instance, that the notion of subject and predicate comes from Aristotle’s logical treatise On Interpretation, or that one of the leading logical works from the Renaissance to the 20th century, The Port Royal Logic, was written and published alongside the Port Royal Grammar. In the 20th century, though, something happened, and the two fields went their separate ways, away from the classical problem.
By Katz’s estimation, three factors led to the schism: (i) the professionalization of the fields, (ii) the difficulty of the classical problem, and (iii) the dominance of empiricism in the fields. Since the story of linguistics in this period has been covered quite a bit, Katz doesn’t waste much time on it, and neither will I. The story of logic, however, interests Katz (more of a philosopher than a linguist) a great deal, and I think it is useful for understanding current theories of semantics. Logicians in the early 20th century, influenced by Katz’s three factors, abandoned the problem of logical form and sought out “manageable problems.” The problem, or perhaps program is the better word for it, that they landed on was the development of artificial languages with which to represent thought. These artificial languages, unlike natural language, wore their logical form on their sleeves, to borrow Katz’s formulation.
In order to formulate an artificial logical language, Quine—one of Katz’s chief villains—sought to identify and highlight the “logical particles” of natural language as distinct from the extra-logical vocabulary. The logical particles (e.g., and, or, not, if—then) are those that have inferential powers, while the extra-logical words (e.g., dog, bachelor, Moira, lamp) are those that have only referential powers. This seems fairly intuitive, but Katz argues that there is no non-arbitrary way of dividing the logical vocabulary from the extra-logical vocabulary. This is certainly an odd assertion. I mean, it’s pretty obvious that and is a logical word and dog isn’t, right? While it might be a valid intuition that these are different sorts of words, what Katz argues is that the set of words with inferential powers is much larger than what we might call the logical particles.
To show this, Katz walks us through a possible method for identifying logical particles and demonstrates that this method cannot actually rule out any word as a logical particle. The method starts by examining a valid inference such as (1)–(3).
(1) All terriers are dogs.
(2) All dogs are animals.
(3) Hence, all terriers are animals.
We can see that (1)–(3) remains valid regardless of the meaning of dogs, animals, and terriers; that is, we could replace the tokens of those words with tokens of virtually any other nouns and we’d still have a valid inference. By the same token, though, the validity of (1)–(3) depends on the meaning of all, are, and hence. So, we remove from our list of candidates for logical particles the words that can be factored out of such valid inferences. Katz argues that, while this method gives the expected results for classical syllogisms and perhaps some other logical inferences, things get messy when we look at the full range of valid inferences.
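To make “valid regardless of the meaning of the nouns” concrete, here is a small sketch of my own (not Katz’s): a brute-force check that the schematic form of (1)–(3)—all P are Q, all Q are R, hence all P are R—holds under every interpretation of the three predicates over a small finite domain.

```python
from itertools import product

def barbara_is_schematically_valid(domain_size: int = 3) -> bool:
    """Check 'all P are Q; all Q are R; hence all P are R' under every
    interpretation of P, Q, and R over a small finite domain."""
    domain = range(domain_size)
    # Every possible predicate over the domain, as a tuple of truth values.
    predicates = list(product([False, True], repeat=domain_size))
    for P, Q, R in product(predicates, repeat=3):
        premise1 = all(Q[x] for x in domain if P[x])
        premise2 = all(R[x] for x in domain if Q[x])
        conclusion = all(R[x] for x in domain if P[x])
        if premise1 and premise2 and not conclusion:
            return False  # a counterexample would refute schematic validity
    return True

print(barbara_is_schematically_valid())  # True: no interpretation refutes it
```

The search never finds a counterexample, which is exactly the sense in which dogs, animals, and terriers can be “factored out”: only the placement of all, are, and hence is doing the inferential work.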
Katz presents (4) and (5) as a valid inference, but argues that the method of factoring we applied to (1)–(3) gives different results here.
(4) Socrates is a man.
(5) Hence, Socrates is male.
We can factor out Socrates here, but not man or male. The inference from (4) to (5) seems to depend on the meaning of the latter two words. If we follow our methodology, then we have to add male and man to our logical particles, because they seem to have inferential powers. With a few moments of thought, we can see that this leads to a situation where there is no logical/extra-logical distinction, because every word is a logical particle. Thus Quine’s program is doomed to failure.
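By contrast, the bare form of (4)–(5)—P(Socrates), hence Q(Socrates)—fails the same brute-force test. A quick sketch (again mine, not Katz’s): search for an interpretation where the premise holds but the conclusion does not.

```python
from itertools import product

def find_counterexample(domain_size: int = 2):
    """Search for an interpretation of P and Q over a small domain where
    'P(socrates)' is true but 'Q(socrates)' is false."""
    socrates = 0  # an arbitrary individual in the domain
    predicates = list(product([False, True], repeat=domain_size))
    for P, Q in product(predicates, repeat=2):
        if P[socrates] and not Q[socrates]:
            return P, Q  # premise true, conclusion false
    return None

# Unlike the syllogism, this form has counterexamples, so the validity of
# (4)-(5) must come from the meanings of "man" and "male" themselves.
print(find_counterexample() is not None)  # True
```

Since the inference is valid anyway, the meanings of man and male are supplying inferential power—precisely the power the method reserved for logical particles.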
As anyone who has learned any formal logic knows, though, Quine’s program became the orthodoxy. And, in fact, his conception of logic is, in many ways, the basis for semantics as practiced by contemporary generative grammarians. Katz identifies the work of George Lakoff and that of Donald Davidson as early attempts to apply Quinean logic to language, and the practice continues today.
As something of an aside, formal semanticists seem to take as given the assertion that there is a privileged class of logical particles, and they try to analyze a portion of the vocabulary outside that class so that it can be expressed using the logical particles and some simple atomic extra-logical “words.” What belongs to that analyzable portion of the vocabulary is not well defined; I know that know and should are in that portion and that dog and wallet are outside of it, but I can’t really get much more specific than that.
What’s stranger is that even some of the words that correspond to logical particles are up for analysis. And triggers some implicatures, which are often analyzed using the Quinean tools. The meaning of if—then is also up for debate. I almost wrote a paper as part of my PhD on conditionals, and the one thing that the semantic literature seems to agree on is that the meaning of if—then is not the material conditional (→). Being a naive syntactician with no understanding of the history of logic, I had basically taken formal logic as gospel. It never occurred to me that the logician’s conception of conditional statements could be flawed.
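For readers without the logic background: the material conditional is purely truth-functional, with p → q defined as ¬p ∨ q. A minimal sketch of why it sits uneasily with natural-language if—then: any conditional with a false antecedent comes out true.

```python
def material_conditional(p: bool, q: bool) -> bool:
    """The logician's truth-functional conditional: p -> q is (not p) or q."""
    return (not p) or q

# A false antecedent makes the conditional true no matter the consequent.
# So "if the moon is made of cheese, then 2 + 2 = 5" comes out true, which
# clashes with most speakers' intuitions about English "if...then".
print(material_conditional(False, False))  # True
print(material_conditional(False, True))   # True
print(material_conditional(True, False))   # False
```

This mismatch between the truth table and speaker intuition is one standard reason the semantic literature rejects → as the meaning of if—then.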
Of course, if Katz is correct, then logics built on Quine’s logical/extra-logical distinction are the proverbial houses built on sand. And if I’m correct that formal semantics is built on Quinean logic, then formal semantics is a proverbial house built on a house built on sand. End of aside.
Having argued that the empiricist theories of logic such as those of Quine, Frege, and Carnap are unsuited for inclusion in a rationalist theory of language such as generative grammar, Katz moves on to the next task, the one that occupies the remainder of his book: the task of constructing a rationalist and scientific theory of semantics. According to Katz, this task was viewed by the philosophers of his day as an impossibility, and I don’t know if much has changed.
In fact, it seems to me that among semanticists and a number of generative syntacticians, there is a strong hostility towards rationalist conceptions of semantics as put forth by Katz (and also Chomsky). As an illustrative anecdote, I recall once talking with an established linguist and expressing some skepticism towards modern formal semantics. When I suggested that a more rationalist, intensionalist theory of semantics might be fruitful, they responded that, while I might be right, if I decided to pursue that line of research, I would never be hired as a semanticist. Luckily for me, of course, I’m a syntactician, but that’s still a rather chilling thing to hear. End of second aside.
Katz concludes his preface by putting his program in context and outlining the structure of the book. I won’t bore you with the details, but I will preview chapter 1, “The scope of semantics,” wherein Katz considers the question what is meaning? and gives a shockingly sensible answer: that’s a complex question; we’ll need to answer it scientifically.