Colin Phillips on the Theory/Experiment divide.

Over on his blog, Colin Phillips has taken up the age-old theory vs experiment debate. The position he seems to take is that the contrast between theory and experiment is illusory and, therefore, the debate itself is wrong-headed. Here he is making what seems to be his main point:

There’s a terminological point here that is straightforward. Nobody own [sic] the term “theory”. All flavors of linguist are using evidence and reasoning to build generalizable accounts of how the human language system works. We all use empirical evidence, and we all develop theories. The distinction between theoreticians and experimentalists is largely a myth. Sometimes our experiments are so easy that we’re embarrassed to label them as experiments (e.g., “Does that sentence sound better to me if I take out the complementizer?”). Sometimes the experiments take a long, long time, so we get to spend less time thinking about the theoretical questions. But it’s all basically the same thing.

“Theories all the way down” by Colin Phillips

This quote includes a few mistakes which tend to muddle the debate. The first is the focus on whether a person can be strictly a theoretician or an experimentalist. Phillips says “no” and I would tend to agree, because as humans we all contain multitudes, to paraphrase Walt Whitman. It doesn’t follow from this, though, that theory and experiment are the same thing. Creators can be critics, and producers can be consumers, but this does not negate the contrasts between art and criticism, between production and consumption.

The second mistake, and this is a widespread mistake in linguistics, is that he seems to miscategorize the pen-and-paper empirical method of old-school linguistics as theoretical. Norbert Hornstein has posted about this error on his blog a number of times, adopting from Robert Chametzky a three-way distinction between analytical, theoretical, and metatheoretical work. As Hornstein argues, most of what we call theoretical syntax is better described as analytical—it applies theoretical constructs to data with the dual effect of testing the constructs and making sense of the data. To be sure, this trichotomy takes the data-gathering method for granted, and it would be interesting to think about how gathering relates to analysis. Are they independent of each other, or is the gathering a proper subpart of the analysis? Either way, I would agree with Phillips that “experimental” and “pen-and-paper” work ought to be grouped together, but I disagree that either is theoretical work.

Theoretical work is a different beast that presents its own endemic challenges—difficulties that more analytical work does not have to address. Blurring the line between the two types of work, however, introduces additional hurdles. These hurdles usually take the form of conferences, journals, and job postings that declare themselves “theoretical” but are in actuality mainly analytical. This ends up crowding out truly theoretical work, which any science needs at least as much as experimental work in order to progress and flourish.

To close, why bother arguing about language use? Isn’t it fluid—always changing? I suppose it is. I don’t particularly care what we call theory or analysis or experiment, but I do care that we recognize the distinctions between them. Please forgive the piety, but I’m a sucker for an aphorism: as Plato said, the goal of inquiry is to carve Nature at its joints, and as Confucius said, “The beginning of wisdom is to call things by their proper name.”

My Top Culture Things of 2019

It’s the end of the year, which means it’s Best of the Year season. I don’t think I saw enough movies, discovered enough new music, read enough books, or watched enough TV shows this year to make a Top 5 list for any of those media. What’s more, some of the culture that I most enjoyed this year wasn’t from this year. But I’m not a professional critic, so who needs those restrictions…

Fontaines D.C.

I first heard of this Dublin post-punk(?) band on an episode of Sound Opinions, where they were described as a 21st-century The Fall, a band I never quite got into. I listened to some of their singles anyway and was very quickly drawn to them. The music is great, but where they really shine is their lyrics, which have a wonderful rhythmic poetry to them but also speak to and of everyday life in a modern city, especially the everyday life of the working class. The frontman, Grian Chatten, is like a class-conscious Shaun Ryder. I strongly recommend listening to their debut album Dogrel and seeing them live when they come to your town.

Lodge 49

It seems like the TV shows worth watching are either brutal tragedies centered on deeply flawed anti-heroes (see Breaking Bad) or cartoonish comedies with heroes who either learn a lesson or are proved right every episode (see Parks & Recreation). A few shows, however, have cropped up that, although they have some of the trappings of these two genres, are something different altogether. Lodge 49 was* one of those shows**. In many ways it seems like a contradictory show: it’s wholesome without being corny or trite; it’s mystical yet it quite accurately captures how it feels to be alive right now; it’s goofy and fun but surprisingly touching; it’s warm and friendly but pervaded with themes of loss, loneliness, and alienation. Others have written better about this show, so I’ll just leave two of my observations:

  • As far as I can tell, it’s the only TV show whose driving force is its characters’ search for meaning.
  • I’ve never encountered a show that so vividly depicts characters who are completely lost.

Plus, there’s a scene in which Paul Giamatti’s character engages in an impromptu dumpling-eating contest.

* Moments before I started writing this, I learned that the cancelled Lodge 49 would not be resurrected. The show’s creator, Jim Gavin, announced the loss in a heartfelt Twitter thread.
** Also in this non-genre genre for me are Deadwood and the oeuvre of David Simon, for different reasons.

Push

As most renters in any city know, the global housing market is not working. Rents are rising, even as high-rise condos are constantly being built. More frustrating, at least for me, is that when we point out this incongruity, we’re met with either a shrug or prescriptions that are worse than the problem. The documentary Push probes the sources of the housing crisis*, following the UN Special Rapporteur on Housing, Leilani Farha, as she investigates the crisis and advocates for a rights-based approach to housing. If you’re already pissed at your obscene rent, you’ll leave this documentary enraged, but with ideas for constructive ways to channel that rage.

* SPOILER: It’s a mix of investment bankers, tech bros, and traditional organized crime

Jerrold Katz

Non-linguists, you can skip this one.
I’d heard about Jerrold Katz before, but never delved into his work. When I did finally open up his book Semantic Theory, which I had quite low expectations for, I was pleasantly surprised to find a fresh-to-me approach to one of the most puzzling aspects of language: meaning. Beyond this book, I also read Cogitations—a philosophical study which argues that Descartes’ “I think therefore I am” has been consistently misanalyzed and underappreciated by modern philosophers—Sense, Reference, and Philosophy—a posthumously published book which sums up Katz’s theory of language and its relation to broader philosophical questions—and many of his papers. If you’re someone who’s interested in semantics but nonplussed by formal semantics, I recommend reading Katz.

Calvary & First Reformed

I was inspired to watch both of these movies after coming across a 2015 piece by Alissa Wilkinson in The Atlantic. The piece argues that if you want films that properly portray faith, you should skip the Bible movies and the overtly Christian movies and go see an indie film. Specifically, she recommends John Michael McDonagh’s black comedy Calvary. I took her advice, and extrapolated from it to also check out the more recent First Reformed*. Both are not only brilliantly crafted films—Calvary is legitimately funny in that sad Irish way, while First Reformed has a quiet, intimate urgency to it, mixed with some alienation and magical realism—but they both depict men of the cloth struggling with and questioning their faith in God and humanity.

* I also watched First Reformed on the strength of its being on Priscilla Page’s best-of-2018 list. Do yourself a favour: follow Page on Twitter and listen to her movie recommendations. She’s never steered me wrong.

The Lighthouse

There may be some recency bias at play, but I’ll just leave my single-tweet review here:

Another bit of sovereignty surrendered at the US Border

As I mentioned previously, I’m not very comfortable with the presence of US Customs and Border Protection agents in Canadian airports. I recently found out that the situation just got much worse. In a piece for the CBC, H.M. Jocelyn, a Rutgers PhD candidate, reports on recent amendments to the Canada-United States Preclearance Agreement, the set of laws that govern the presence of USCBP at Canada-US border crossings. According to Jocelyn, the amendments effectively allow US border guards operating on Canadian soil to countermand Canadian authorities. For example:

This new authority also allows U.S. border guards to deny Canadians their right of withdrawal. Before the amendment to the law was enacted, if a person felt at all uncomfortable in the course of preclearance questioning she could simply leave, retracting her intention to cross the border with no penalty.

Now, as a result of amendments, the guard is entitled to detain her if he finds “reasonable grounds” to do so. And the request to leave in itself could be construed as reasonable grounds.

Given that USCBP is not really governed by the US Constitution at the border, Canadians had to hope that our own constitutional protections would be there for us at border crossings. Now, it seems, we can’t count on that anymore.

On the general character of semantic theory (Part b)

(AKA Katz’s Semantic Theory (Part IIIb). This post discusses the second half of chapter 2 of Jerrold Katz’s 1972 opus. For my discussion of the first half of the chapter, go here.)

(Note: This post was written in fits and starts, which is likely reflected in its style (or lack thereof). My apologies in advance.)

The first half of chapter 2 was concerned with the broader theory of language, rather than a semantic theory. In the second half of the chapter, Katz begins his sketch of the theory of semantics. It’s at this point that I pick up my review.

4. The structure of the theory of language

In this section, Katz discusses universals, which he frames, following Chomsky, as constraints on grammars. Katz differs from Chomsky, though, in how he divvies up the universals—whereas Chomsky, in Aspects, distinguishes between formal and substantive universals, Katz adds a third type: organizational universals. These classifications are defined as follows:

Formal universals constrain the form of the rules in a grammar; substantive universals provide a theoretical vocabulary from which the constructs used to formulate the rules of particular grammars are drawn; organizational universals, of which there are two subtypes, componential organizational universals and systematic organizational universals, specify the interrelations among the rules and among the systems of rules within a grammar.

p30-31

Furthermore, formal, substantive, and componential universals cross-classify with phonological, syntactic, and semantic universals. This means that we can talk about substantive phonological universals, or componential semantic universals, and so on. So, for example, a phonological theory consists in a specification of the formal, substantive, and componential universals at the phonological level, and such a specification amounts to a definition of the phonological component of the language faculty. Systematic universals, then, specify how the components of the grammar are related to each other. With this discussion, Katz sets up his goals: to specify the formal, substantive, and componential universals at the semantic level. More precisely, he aims to develop the following:

(2.7) A scheme for semantic representation consisting of a theoretical vocabulary from which semantic constructs required in the formulation of particular semantic interpretations can be drawn

p33

(2.8) A specification for the form of the dictionary and a specification of the form of the rules that project semantic representations for complex syntactic constituents from the dictionary’s representations of the senses of their minimal syntactic parts.

p33

(2.9) A specification of the form of the semantic component, of the relation between the dictionary and the projection rules, and of the manner in which these rules apply in assigning semantic representations

p33

These three aspects of semantic theory, according to Katz, represent the substantive, formal, and componential universals, respectively. A theory that contains (2.7)-(2.9) and answers questions 1-15 (as listed here) would count as an adequate semantic theory.

5. Semantic theory’s model of a semantic component

So, Katz asks rhetorically, how can semantic relations, such as analyticity, synonymy, or semantic similarity, be captured in the purely formal terms required by (2.7)-(2.9)? The answer is simple: semantic relations and properties are merely formal aspects of the compositional meanings of expressions. This is a bold and controversial claim: semantic properties/relations are formal properties/relations or, to put it more strongly, semantic properties/relations are, in fact, syntactic properties/relations (where “syntactic” is used in a very broad sense). Of course, this claim is theoretical and rather coarse. Katz aims to make it empirical and fine.

So, what does Katz’s semantic theory consist of? At the broadest level, it consists of a dictionary and a set of projection rules. No surprise yet; it’s a computational theory, and any computational system consists of symbols and rules. The dictionary contains entries for every morpheme in a given language, where each entry is a collection of the senses of that morpheme. Finally, he defines two “technical terms.” The first is a reading, which refers to “a semantic representation of a sense of a morpheme, word, phrase, clause, or sentence” and which is further divided into lexical readings and derived readings. The second is a semantic marker, which refers to “the semantic representation of one or another of the concepts that appear as parts of senses.” Katz then continues, identifying the limiting case of the semantic marker: primitive semantic markers.

Here it’s worth making a careful analogy to syntactic theory. Semantic markers, as their name suggests, are analogous to phrase markers. Each is a representation of constituency: a phrase marker represents the syntactic constituents of an expression, while a semantic marker represents the conceptual constituents of a concept. In each theory there are base cases of the markers: morphemes in syntactic theory, and the aptly named primitive semantic markers in semantic theory. I must stress, of course, that this is only an analogy, not an isomorphism. Morphemes are not mapped to primitive semantic markers, or vice versa. Just as a simple morpheme can be phonologically complex, it can also be semantically complex. Furthermore, as we’ll see shortly, while complex semantic markers are structured, there is no reason to expect them to be structured according to the principles of syntactic theory.

Before Katz gets to the actual nitty-gritty of formalizing these notions, he pauses to discuss ontology. He’s a philosopher, after all. Semantic markers are representations of concepts and propositions, but what are concepts and propositions? Well, we can be sure of some things that they are not: images, mental ideas, and particular thoughts, which Katz groups together as what he calls cognitions. Cognitions, for Katz, are concrete, meaning they can be individuated by who has them, when and where they occur, and so on. If you and I have the same thought (e.g., “Toronto is the capital of Ontario”), then we have had different cognitions. Concepts and propositions, for Katz, are abstract objects and, therefore, independent of space and time, meaning they can’t be individuated by their nonexistent spatiotemporal properties. They can, however, be individuated by natural languages, which Katz also takes to be abstract objects, and, in fact, are individuated easily by speakers of natural languages. Since, in a formulation echoed recently by Paul Pietroski (at around 5:45), “senses are concepts and propositions connected with phonetic (or orthographic) objects in natural languages” and the goal of linguistic theory is to construct grammars that model that connection, the question of concept- and proposition-individuation is best answered by linguistic theory.1

But, Katz’s critics might argue, individuation of concepts and propositions is not a definition of “concept” or “proposition”. True, Katz responds, but so what? If we needed to explicitly define the object of our study before we started studying it, we wouldn’t have any science. He uses the example of Maxwell’s theory of electromagnetism, which accurately models the behaviour and structural properties of electromagnetic waves but does not furnish any definition of electromagnetism. So if we can come up with a theory that accurately models the behaviour and structural properties of concepts and propositions, why should we demand a definition?

We also can’t expect a definition of “semantic marker” or “reading” right out of the gate. In fact, Katz argues, coming up with those definitions is one of the goals of semantic theory (2.7), and we can’t be expected to have the complete theory before we’ve developed it. Nevertheless, we can use some basic intuitions to come up with a preliminary sketch of what a reading and a semantic marker might look like. For instance, the everyday word/concept “chair” has a common sense, which is composed of subconcepts and can be represented as the set of semantic markers in (2.15).

(2.15) (Object), (Physical), (Non-living), (Artifact),
       (Furniture), (Portable), (Something with legs),
       (Something with a back), (Something with a seat),
       (Seat for one)

Of course, this is just preliminary. Katz identifies a number of places for improvement. Each of the semantic markers is likely decomposable into simple markers. Even the concept represented by “(Object)” is likely decomposable.

Or, Katz continues, we can propose that semantic markers are ways of making semantic generalizations. Consider how “chair” relates to words such as “hat,” “planet,” “car,” and “molecule,” as compared to words such as “truth,” “thought,” “togetherness,” and “feeling.” Obviously, these words all denote distinct concepts, but just as obviously, the two groupings contrast with each other. We can think of the semantic marker “(Object)” as the distinguishing factor in these groupings: the former is a group of objects, the latter a group of non-objects. So semantic markers, like phonological features and grammatical categories, are expressions of natural classes.

Finally, Katz proposes a third way of thinking of semantic markers: “as symbols that mark the components of senses of expressions on which inferences from sentences containing the expressions depend.” (p41) For instance, we can infer (2.19) from (2.18), but we can’t infer (2.27).

(2.18) There is a chair in the room.

(2.19) There is a physical object in the room.

(2.27) There is a woman in the room.

We can express this inference pattern by saying that every semantic marker that comprises the sense of “physical object” in (2.19) is contained in the sense of “chair” in (2.18), but that is not the case for “woman” in (2.27). The sense of “woman” in (2.27) contains semantic markers like “(Female)” which are not contained in the sense of “chair” in (2.18). Here Katz notes that his proposal that concepts like “chair” consist of markers is merely an extension of Frege’s observation that (2.28a,b,c) are together equivalent to (2.29).

(2.28)
(a) 2 is a positive number
(b) 2 is a whole number
(c) 2 is less than 10

(2.29) 2 is a positive whole number less than 10

For Frege, “positive number”, “whole number”, and “less than 10” are all properties of “2” and marks of “positive whole number less than 10”. Katz’s extension is to say that the concepts associated with simple expressions can have their own marks.
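To make the containment idea concrete, here’s a little sketch in Python. The encoding is mine, not Katz’s: I’m treating a reading as a bare set of marker labels, with the marker inventory taken from the preliminary entry in (2.15).

```python
# A toy sketch (my own encoding, not Katz's formalism) of readings as
# sets of semantic markers, with inference as marker containment.

CHAIR = {
    "Object", "Physical", "Non-living", "Artifact", "Furniture",
    "Portable", "Something with legs", "Something with a back",
    "Something with a seat", "Seat for one",
}
PHYSICAL_OBJECT = {"Object", "Physical"}
WOMAN = {"Object", "Physical", "Living", "Human", "Female"}

def licenses_inference(premise, conclusion):
    """An inference like (2.18) -> (2.19) goes through iff every marker
    in the conclusion's sense is contained in the premise's sense."""
    return conclusion <= premise

print(licenses_inference(CHAIR, PHYSICAL_OBJECT))  # True: (2.18) -> (2.19)
print(licenses_inference(CHAIR, WOMAN))            # False: (Female) is missing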

Next, Katz discusses the notions of lexical and derived readings, which are, in a sense, the inputs and outputs, respectively, of the process of semantic composition. As the name suggests, lexical readings are what is stored in the dictionary. When a syntactic object hits the semantic component of the grammar, the first step is to replace the terminal nodes with their lexical readings. Derived readings are generated by applying projection rules to the first level of non-terminal nodes, and then the next level, and so on until the syntactic object is exhausted.
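As I read it, this is a bottom-up tree recursion. Here’s a minimal sketch, again in my own invented encoding: terminals are strings, non-terminals are lists of daughters, and the projection rule is a deliberately unrestrictive placeholder.

```python
# A minimal sketch (mine, with invented names) of Katz's bottom-up
# interpretation: terminals get lexical readings from the dictionary,
# then projection rules derive readings for each non-terminal in turn.

DICTIONARY = {
    "light": [{"Low weight"}, {"Inconsequential"}],
    "book":  [{"Bound pages"}, {"Work of literature"}],
}

def interpret(node):
    """Return the list of readings for a syntactic object. A node is
    either a terminal (a string) or a list of daughter nodes."""
    if isinstance(node, str):
        return DICTIONARY[node]           # step 1: lexical readings
    daughter_readings = [interpret(d) for d in node]
    return project(daughter_readings)     # step 2: a projection rule

def project(readings_per_daughter):
    """Placeholder projection rule: union every combination of daughter
    readings. Katz's actual rules also enforce selectional restrictions."""
    derived = [set()]
    for readings in readings_per_daughter:
        derived = [acc | r for acc in derived for r in readings]
    return derived

print(interpret(["light", "book"]))  # 2 x 2 = 4 unrestricted derived readings
```

Note that the placeholder rule generates every combination of the daughters’ senses, which is exactly the problem Katz turns to next.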

The process of deriving readings, Katz asserts, must be restrictive, in the sense that the interpretation of a sentence is never simply every permutation of the lexical readings of its component parts. For instance, suppose the adjective “light” and the noun “book” have N and M senses in their respective lexical readings. If our process for deriving readings were unrestrictive, we would expect “light book” to have N×M senses while, in fact, fewer are available. We can see this even when we restrict ourselves to 2 senses for “light”—“low in physical weight” and “inconsequential”—and 2 senses for “book”—“a bound collection of paper” and “a work of literature”. Restricting ourselves this much, we can see that “light book” is 2-ways ambiguous, describing a bound collection of papers with a low weight or a work of literature whose content is inconsequential, but not a work of literature with a low weight or an inconsequential bound collection of papers. Our semantic theory, then, must be such that the compositional process it proposes can appropriately restrict the class of derived readings for a given syntactic object.

To ensure this restrictiveness, Katz proposes that the senses that make up a dictionary entry are each paired with a selectional restriction. To illustrate this, he considers the adjective “handsome”, which has three senses: when applied to a person or artifact, it has the sense “beautiful with dignity”; when applied to an amount, it has the sense “moderately large”; when applied to conduct, it has the sense “gracious or generous”. So, for Katz, the dictionary entry for “handsome” is as in (2.30).

(2.30) "handsome";[+Adj,…];(Physical),(Object),(Beautiful),
                           (Dignified in appearance),
                           <(Human),(Artifact)>
                           (Gracious),(Generous),<(Conduct)>
                           (Moderately large),<(Amount)>

Here the semantic markers in angle brackets represent the markers that must be present in the senses that “handsome” is applied to.
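To see how these restrictions do their filtering work, here’s a continuation of my sketch above, using the two senses each of “light” and “book” from earlier; the marker names are my own hypothetical stand-ins.

```python
# Continuing the toy sketch: each sense now pairs its markers with a
# selectional restriction (Katz's angle brackets) naming the markers
# its argument must contain. Marker names are hypothetical stand-ins.

LIGHT = [
    ({"Low weight"},      {"Physical"}),  # applies only to physical things
    ({"Inconsequential"}, {"Content"}),   # applies only to things with content
]
BOOK = [
    {"Object", "Physical", "Artifact", "Bound pages"},  # the physical volume
    {"Object", "Content", "Work of literature"},        # the literary work
]

def derive_readings(modifier_senses, head_senses):
    """Combine modifier and head senses, keeping only the combinations
    whose selectional restriction the head sense satisfies."""
    return [
        markers | head
        for markers, restriction in modifier_senses
        for head in head_senses
        if restriction <= head
    ]

# Of the 2 x 2 = 4 conceivable combinations, only 2 survive:
for reading in derive_readings(LIGHT, BOOK):
    print(sorted(reading))
```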

This solution to the problem of selection may seem stipulative and ad hoc—I know it seems that way to me—but recall that this is an early chapter in a book published in 1972. If we compared it to the theories of syntax and phonology of the time, they might appear similarly unsatisfying. The difference is that syntactic and phonological theories have since developed, through the collaborative effort of many researchers, into more formalized and hopefully more explanatory theories, while Katz’s theory never gained the traction required to spur that level of collaboration.

Katz closes out this section with a discussion of “semantic redundancy rules” and projection rules. Rather than discuss these, I move on to the final section of the chapter.

6. Preliminary definitions of some semantic properties and relations

Here Katz shows the utility of the theory that he has thus far sketched. That is, he looks at how the semantic properties and relations identified in chapter 1 can be defined in the terms introduced in this chapter. These theoretical definitions are guided by our common-sense definitions, but Katz is careful to stress that they are not determined by them. So, for instance, two things are similar when they share some feature(s). Translating this into his theory, Katz gives the definition in (2.33) for semantic similarity.

(2.33) A constituent Ci is semantically similar to a constituent Cj on a sense just in case there is a reading of Ci and a reading of Cj which have a semantic marker in common. (They can be said to be semantically similar with respect to the concept φ in case the shared semantic marker represents φ.)

Note that we can convert this definition into a scalar notion, so that we can talk about degrees of similarity in terms of the number of shared markers. Katz does this implicitly by defining semantic distinctness as sharing no markers and synonymy as sharing all markers.
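In the toy set encoding from my earlier sketches, these definitions become one-liners (again, the marker sets are hypothetical stand-ins, not Katz’s own entries):

```python
# Hypothetical marker sets, in the same toy encoding as above.
COW   = {"Object", "Physical", "Living", "Animal", "Bovine", "Female"}
BULL  = {"Object", "Physical", "Living", "Animal", "Bovine", "Male"}
TRUTH = {"Abstract"}

def similar(a, b):
    """(2.33): two readings are semantically similar iff they share
    at least one semantic marker."""
    return bool(a & b)

def degree_of_similarity(a, b):
    """The scalar variant: count the shared markers."""
    return len(a & b)

def distinct(a, b):
    """Semantic distinctness: no shared markers."""
    return not (a & b)

def synonymous(a, b):
    """Synonymy: all markers shared."""
    return a == b

print(similar(COW, BULL), degree_of_similarity(COW, BULL))  # True 5
print(distinct(COW, TRUTH), synonymous(COW, BULL))          # True False
```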

Similarity is a rather simple notion, and therefore has a simple definition; others require some complexity. For instance, analytic statements like “Liars lie” are vacuous assertions due to the fact that the meaning of the predicate is contained in the meaning of the subject. Here, Katz gives the definition one might expect, but it is clear that more needs to be said, as the notions of subject and predicate are more difficult to define. More on this in later chapters.

A more puzzling and less often remarked-upon semantic relation is antonymy—the relation that holds of the word pairs in (2.46) and of the set of words in (2.47).

(2.46) bride/groom, aunt/uncle, cow/bull, girl/boy, doe/buck

(2.47) child/cub/puppy/kitten/cygnet

Katz notes that although antonymy is generally taken to be merely lexical, it actually projects to larger expressions (e.g., “our beloved old cow”/”our beloved old bull”), and is targeted by words like “either” as demonstrated by the fact that (2.49a) is meaningful while (2.49c) is anomalous.

(2.49)
a. John is well and Mary’s not sick either.
c. John is well and Mary’s not {well/foolish/poor/dead} either.

In order for antonymy to be given an adequate theoretical definition, then, it must be expressed formally. Katz does this by marking semantic markers that represent antonymy sets with a superscript. For instance, “brother” and “sister” would be represented as (Sibling)(M) and (Sibling)(F), respectively. Again, this is clearly stipulative and ad hoc, but that is to be expected at this stage of a theory. In fact, Katz seems to have been revising his theory up to his death, with the colour incompatibility problem—the question of why the sentence “The dot is green and red” is contradictory—occupying the focus of a 1998 paper of his and a section of his posthumous book. Even Katz’s ad hoc solution to the problem, though, is miles ahead of any solution that could possibly be given in current formal semantics—which bases its definition of meaning on reference—because, to my knowledge, there is no way to account for antonymy in formal semantics. Indeed, the mere fact that Katz is able to give any theoretical definition of antonymy puts his theory well ahead of formal semantics.
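Katz’s superscript device also translates neatly into the toy encoding: treat a superscripted marker like (Sibling)(M) as a pair of a base marker and a value drawn from an antonymy set. A hypothetical sketch:

```python
# A hypothetical sketch of Katz's superscripted markers: (Sibling)(M)
# becomes the pair ("Sibling", "M"), where the values exhaust an
# antonymy set.

BROTHER = {("Sibling", "M")}
SISTER  = {("Sibling", "F")}
UNCLE   = {("Parent's sibling", "M")}

def antonymous(a, b):
    """Two readings are antonymous iff some pair of markers shares its
    base but differs in its superscripted value."""
    return any(
        base_a == base_b and val_a != val_b
        for (base_a, val_a) in a
        for (base_b, val_b) in b
    )

print(antonymous(BROTHER, SISTER))  # True: same base, different values
print(antonymous(BROTHER, UNCLE))   # False: different bases
```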

Conclusion

Katz’s rough sketch of a semantic theory is already fairly successful in that it’s able to provide concrete definitions of many of the semantic notions that he identifies in the first chapter.2 I don’t believe this success is due to Katz’s ingenuity, but rather to the fact that he approached theory-building as the central activity in semantic inquiry, rather than as an arcane peripheral curiosity. Since theory-building is central, it can occur in tandem with the analysis of linguistic intuitions.

In the next chapter, Katz responds to criticisms from his contemporaries. I’m not sure how enlightening this is for modern audiences, so I might skip it. We’ll see…


  1. ^ This argument, of course, leads pretty quickly to a classic problem inherent in the notion of abstract objects: the problem of how abstract objects can interact with the physical world. We could, of course, get around this by denying that concepts and propositions are abstract, but then we would need to explain how two different people could have the same thought at different times, in different places. I’m not sure which is the best choice, and I’m not sure that linguistics (or any science) is up to the task of deciding between the two, so I’ll just proceed by going along with Katz’s realist attitude about abstract objects, with the caveat that it might be wrong—a kind of methodological Platonism.
  2. ^ Katz does not give definitions for presupposition or question-answer pairs here; more on that in later chapters.

Should USCBP be operating in Canadian airports?

I’ve long been uncomfortable with the fact that US Customs and Border Protection (USCBP) operates in major Canadian airports like Toronto Pearson. I don’t recall when it occurred to me that the idea of USCBP operating within Canada was odd, but I haven’t been able to shake the idea. Certainly, USCBP didn’t endear themselves to me when they held me in secondary inspection for hours at the Queenston-Lewiston border crossing, when I was on my way to visit my cousin in NY and spend a little of my hard-earned money in the US. And my wariness of the organization only grew when I learned that the nearly unfettered power they held at borders and ports of entry extended well into the US, and when they proposed to collect social media account info on people entering the US.

Even with my distrust of USCBP, I still felt more or less safe going through pre-clearance at Pearson. I was still on Canadian soil, still protected by the Charter of Rights and Freedoms. The worst USCBP could do was turn me away. But my fear of USCBP is not why I’m questioning their presence in Canadian airports today. My fresh qualms about their presence on Canadian soil are due to the migrant detention centers that the USCBP is running on their southern border. Call these detention centers what you want, every indication is that they are inhumane and criminal, and I can’t quite see how the Canadian government can allow USCBP to operate on Canadian soil and not be complicit in its crimes.

Granted, allowing USCBP to perform pre-clearance in airports is a great convenience for Canadians, and may even protect us from some of the abuses they are free to engage in on their side of the border. And yes, it is a government’s job to protect the rights of its citizens. But surely, if we want to be viewed as a country that respects international law and human rights, we can’t allow an organization that is currently breaking international law and violating the human rights of a great many asylum seekers to operate within our borders.

A bit of personal news

It’s been a little while since I’ve blogged here, and that’s because, for the past few weeks, I’ve been preparing for an event that occurred yesterday—I successfully defended my dissertation Explaining the resultative parameter.

Better acknowledgements are coming, but for now I want to thank everyone who had a hand in this achievement, from my supervisor, Elizabeth Cowper, and the rest of my committee (Michela Ippolito, Diane Massam, Susana Béjar, Nick LaCara, and external examiner Norbert Hornstein) to the other exceptional and supportive members of the linguistics department at UofT, to my family, and my wonderful partner, Zoe. I couldn’t have done this alone.

The day was exhilarating, and just so much fun. I didn’t take any photos, but others did, and I’ll post them below as they come in.

https://3.bp.blogspot.com/-tChp5GAClZ0/XMIbCkZLF6I/AAAAAAAACo8/mxa239hvl6s7B4-ycTyzFuk68DS0dentACLcBGAs/s640/IMG_2877.JPG
L-to-R: Susana Béjar, Nick LaCara, Michela Ippolito, Elizabeth Cowper, Me (looking dour for some reason), Diane Massam, Norbert Hornstein (Photo by Jennifer McCallum)

On the general character of semantic theory (Part a)

(AKA Katz’s Semantic Theory (Part IIIa). This post discusses chapter 2 of Jerrold Katz’s 1972 opus. For my discussion of chapter 1, go here.)

Having delineated in chapter 1 which questions a semantic theory ought to answer, Katz goes on in chapter 2 to sketch the sort of answer that such a theory would give. He starts at a very high level, discussing the very notion of natural language, and ends up with some of the formal details of the theory that he aims to develop.

Katz begins by reminding the reader that the questions of meaning—questions 1–15 below—are absolute questions. That is, they aren’t meant to be relativized to any particular language.

  1. What are synonymy and paraphrase?
  2. What are semantic similarity and semantic difference?
  3. What is antonymy?
  4. What is superordination?
  5. What are meaningfulness and semantic anomaly?
  6. What is semantic ambiguity?
  7. What is semantic redundancy?
  8. What is semantic truth (analyticity, metalinguistic truth, etc.)?
  9. What is semantic falsehood (contradiction, metalinguistic falsehood, etc.)?
  10. What is semantically undetermined truth or falsehood (e.g., syntheticity)?
  11. What is inconsistency?
  12. What is entailment?
  13. What is presupposition?
  14. What is a possible answer to a question?
  15. What is a self-answered question?

So, asking What is semantic truth in English? is kind of like asking What is a hiccup to a Canadian? This, Katz acknowledges, makes a strong empirical claim, namely, that every natural language should exhibit the properties whose definitions are requested by questions 1–15.

This claim made me think, as a syntactician, about which notions I would include in the field of syntax as universal in this sense. Notions like sentence or phrase would certainly be there, and category would likely be there. Would subject, predicate, object, and the like be there? Would modification, or transformation? How about interrogative, declarative, imperative, etc.? Notions like word/morpheme, or linear precedence, certainly were included in early versions of syntactic theory, but more recently they tend either to be banished from the theory or dissolved into other notions.

I know of very few syntacticians who ask these questions. Perhaps this is because syntax has decidedly moved beyond the early stage in which Katz found semantics in 1972, but it still behooves us to keep those questions in mind, if only for the purposes of introducing syntax to students. Furthermore, perhaps if we keep these questions in mind, they can serve as a guide for research. Before setting out to answer a research question, the researcher would try to trace that question back to one of the basic questions to judge its likely fruitfulness. I would be curious to see how the papers in, say, LI would fare under such an analysis. But I digress.

Katz continues, asserting that a theory of linguistic meaning must be embedded in a larger theory of natural language, and that in order to develop such a theory we must have some sense of what sort of thing a natural language might be. It is this question that occupies the first part of this chapter.

1. Theories about the objective reality of language


The first thing Katz does here is distinguish between the two main competing conceptions of language (at least the main conceptions of his day): the traditional rationalist conception of language as “the internalized rules of grammar that constitute the fluency of its native speakers”, and the empiricist conception of language as “a vast stock of sound chunks classifiable into various phonological and syntactic categories” (p12). He opts for rationalism, citing the now familiar arguments against the empiricist stance. First off, we can’t identify a language L with the set S of all actual utterances of L because any competent speaker of L can easily construct an expression that lies outside of S. This is because although practical factors force every expression of a language to be of finite length, there is no theoretical limit to the length of an expression; no matter the length of an expression, there is always a grammatical way of lengthening it.
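The unboundedness point is easy to make concrete. A toy illustration (mine, not Katz’s): a single recursive embedding rule is enough to guarantee that for any expression there is a longer grammatical one, so no finite set S of actual utterances can be all of L.

```python
# A toy illustration (mine, not Katz's): one recursive rule is enough
# to show that a language has no longest expression, so it cannot be
# identified with any finite set of actual utterances.

def lengthen(sentence):
    """Embed a sentence under 'I know that', yielding a longer
    grammatical sentence."""
    return "I know that " + sentence

s = "linguistics is a science"
for _ in range(3):
    s = lengthen(s)
print(s)  # I know that I know that I know that linguistics is a science
```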

One could, Katz continues, expand S to be the set of all expressions that a speaker of L could utter without eliciting an odd response from a hearer. However, this amounts to defining L in terms of dispositions of a speech community, namely the dispositions to accept or reject strings of L. In practical reality, though, these dispositions can be wildly inconsistent depending on a variety of psychological and external factors, so if we want a consistent definition we need to clean up our notion of dispositions. Katz does so by “incorporating recursive mechanisms of sentence generation” (p15), or, as they’re more commonly referred to, generative grammars. And once we incorporate generative grammars, we have a rationalist conception of natural language.

Thus far, there’s nothing too surprising. Katz gives us a fairly standard argument in favour of the rationalist conception of language. But this is where Katz’s discussion gets a little strange; this is where he reveals his realist (in the philosophical sense) view of language. It is a mistake, he argues, to identify, say, English with the actual internalized rules in English-speakers’ brains. This would be like “identifying arithmetic with the concrete realizations of the mathematical rules in the heads of those who can compute using positive real numbers” (p16). As evidence for this claim, Katz cites “dead languages” like Sanskrit, which seems to exist (we can make true or false assertions of it) even though its rules are not actualized in any human’s brain the way that Hindi-Urdu’s rules are. Although he doesn’t say it explicitly here, Katz is arguing that languages are abstract entities, like Platonic forms. In his own words: “A language is not itself subject to the fate of the mortals who speak it. It is some sort of abstract entity, whatever it is that this means.” (p16)

Katz further defends this view by identifying it with the standard scientific practice of idealization. So a natural language like, say, Punjabi or a biological species like Homo sapiens is an idealization in that it can’t be defined in terms of concrete examples. Similarly, the notions of ideal gases, perfect vacuums, and massless strings are the idealizations of physics. He also cites Chomsky’s discussion in Aspects of the “ideal speaker-listener” and Rudolf Carnap, who makes a similar observation, that one cannot directly investigate language but must do so by comparison to a constructed language.

Katz’s proposal and argument that languages are abstract entities strike me as interesting but a bit confused. Katz’s argument from dead languages is compelling, and could perhaps be made even stronger. Consider, for instance, reconstructed languages such as Proto-Indo-European or Proto-Algonquian. At best we know a scant few details about these languages, but we can say with some certainty that they were each spoken by some speech community. Do they exist in the same sense as Sanskrit does? I think the answer has to be yes, as the only difference between a reconstructed language and a dead language seems to be a written record of that language, and that is clearly not the difference between a language and a non-language.

The argument based on idealization, though, seems to be slightly confused. The comparison of a language with a species does seem apt, and might point towards his conclusion, but the comparison to ideal gases and the like suggests, I think, a different notion of idealization, the one that I’ve always taken Chomsky to be using. Under this sense, the idealized objects that scientists employ are not hypothesized to be real, but rather to be useful. I don’t believe even the most realist of scientists believes in the existence of frictionless planes. Scientists use these idealizations to reveal real but non-apparent aspects of the world. In discussing the ideal speaker-listener, Chomsky was not suggesting that such a person exists, just that we ought to use this idealized person to help reveal a real aspect of the world, namely, the human language faculty.

2. Effability

In the next section Katz espouses what he calls the principle of effability, which he attributes to a number of earlier philosophers (Frege, Searle, and Tarski). The essence of the principle is roughly that if a proposition or thought is expressible in any language, it is expressible in every language. He spends a good chunk of text defending and sharpening his principle, but I’ll set that discussion aside here, and focus on why he proposes this principle. According to Katz, “effability alone offers a satisfactory basis for drawing the distinction between natural languages, on the one hand, and systems of animal communication and artificial languages, on the other” (p22). Despite this bold-seeming claim, Katz is rather hesitant regarding his principle. He admits that it is rather inchoate and probably not yet up to any empirical task. But only part of his claim is about the viability of effability; the other part is that no other property of natural language can distinguish it from other similar systems.

In particular, Katz takes aim at the properties that Chomsky tends to highlight as distinguishing factors for natural language: creativity, stimulus freedom, and appropriateness. Taking these one by one, he argues that none of them is unique to natural language. First, he considers creativity, which he takes to be the ability of a speaker-listener to produce and understand indefinitely many sentences. This, Katz argues, is a property of (a) any artificial language with recursive rules, and (b) certain animal communication systems, specifically bee communication. Next, Katz takes on stimulus freedom, which he argues means freedom from external stimuli, asserting that “[i]t cannot mean freedom from the control of internal stimuli as well.”1 This being the case, says Katz, stimulus freedom doesn’t make sense as a distinguishing factor. Also, he asserts that some animal behaviour displays such stimulus freedom. Finally, Katz argues that appropriateness is not part of linguistic competence—that it is extragrammatical—and also that some animal behaviour displays this property.

I take some issue with Katz’s critiques of each of the distinguishing properties individually, but I’ll set that aside for now to highlight a broader issue. Even if we take Katz’s critiques at face value, they still don’t refute Chomsky’s claim, because Chomsky’s claim isn’t that each of the three properties distinguishes natural language, but that the conjunction of the three is what distinguishes natural language. That is, natural language is distinct from animal communication and artificial languages in that it is creative, stimulus-free, and appropriate. So, for instance, even if a bee can produce novel dances, it does so in response to a stimulus. Artificial languages might be creative, but it makes little sense to talk about stimulus freedom or appropriateness with respect to them. So Katz’s critiques don’t really have that much force.

At any rate, the principle of effability, while an interesting notion, doesn’t seem to be too crucial for Katz’s theory. The index of the book lists only one reference to effability outside this section. So, on to the next.

3. Competence and Performance

In the final table-setting section of this chapter, Katz takes up and defends Chomsky’s competence/performance distinction. His discussion, though, differs from most that I’ve encountered in that he uses a debate between Chomsky and Gilbert Harman, one of Chomsky and Katz’s empiricist contemporaries. Katz first clears a significant portion of underbrush in this debate in order to get to what he takes to be the crux of the issue: the proposal that linguistic competence consists in the unconscious knowledge of general principles. He summarizes Harman’s issue, which seems to revolve around the notion of grammatical transformations, as follows.

[G]iven that we can say that speakers of a language know that certain sentences are ungrammatical, certain ones ambiguous, certain ones related in certain ways to others, and so on, what licenses us to go further and say that speakers know (tacitly) the linguistic principles whose formalization in the grammar explain the noted ungrammaticality, ambiguity, sentential relations and the like?

(p28)

This challenge, Katz seems to argue, is not based on the empiricist/rationalist debate in epistemology, but rather on the realist/fictionalist argument in the philosophy of science.2 Harman is saying that a transformational grammar is maybe a good model of a speaker-listener of a given language, but it’s just that, a model. Katz responds, with the help of a quote from his erstwhile co-author, Jerry Fodor, that the only sensible conclusion to be drawn from the empirical accuracy of a scientific theory is that the theory is a true description of reality, at least insofar as it is empirically accurate. There is, of course, much more to say about this, but I’ll leave it there.

Thus, Katz sets up his conception of language in order to be able to sketch a theory of semantics within a theory of language. In my next post I will take up the details of that sketch.


  1. ^ Katz cites Cartesian Linguistics for Chomsky’s distinguishing factors, and it’s likely that CL doesn’t discuss stimulus-freedom too extensively. In more recent discussion, though, Chomsky does include internal stimuli in the property of stimulus freedom, so, it’s not clear that Katz’s critique here still holds.
  2. ^ I suspect that there is no strong demarcation between epistemology and philosophy of science, but I can’t say with any confidence one way or the other.

The Scope of Semantics

(AKA Katz’s Semantic Theory (Part II). This post discusses chapter 1 of Jerrold Katz’s 1972 opus. For my discussion of the preface, go here.)

If you’ve taken a semantics course in the past decade or two, or read an introductory textbook on the topic published in that time span, you probably encountered, likely at the outset, the question What is meaning? followed almost immediately by a fairly pat answer. In my experience, the answer given to that question was reference1—the meaning of an expression, say dog, is the set of things in the world that that expression refers to, the set of all dogs in this case. Now, I can’t exactly recall my reaction the first time a teacher presented that as an answer to the question of meaning. I might have been wholly unimpressed, or I might have had my mind blown, the way that an impressionable young mind can be blown by someone giving a pat, confident answer to a deep question. Either way, I know that every time I’ve heard that answer2 to the question of meaning since, it’s become less impressive, to the point of being slightly offensive. At best, a pat answer is incomplete; at worst, it’s flat wrong.

Of course, I never really had a better answer to the question of meaning, and most of the other answers on offer seemed much worse. I couldn’t shake the unease I had with reference as an answer, but I couldn’t fully articulate that unease. Which is why I was very quickly drawn into Semantic Theory—Katz pinpoints and articulates the source of that unease on page 3 of the book:

The misconception, it seems to me, lies in the supposition that the question “What is meaning?” can be answered in a direct and straightforward way. The question is generally treated as if it were on par with questions like “What is the capital of France?” to which the direct and straightforward answer “Paris” can be given. It is supposed that an answer can be given of the form “Meaning is this or that.” But the question “What is meaning?” does not admit of a direct “this or that” answer; its answer is instead a whole theory [emphasis added]. It is not a question like “What is the capital of France?” “When did Einstein retire?” “Where is Tasmania?” because it is not merely a request for an isolated fact, a request which can be answered simply and directly. Rather it is a theoretical question, like “What is matter?” “What is electricity?” “What is light?”

(Katz 1972, p3)

Imagine if, instead of developing theories of matter, electricity, and light, the early physicists had been satisfied with giving a simple answer like Matter is anything you can touch and feel. We wouldn’t have a science of physics, or chemistry. We likely wouldn’t have any science as we know it.

Katz goes on to acknowledge that, if one were to ask a physicist what electricity is, they might give a simple answer, but notes that such an answer would be a highly condensed version of the theory of electromagnetism that has been developed over centuries of inquiry. Similarly, if you were to ask a phonologist what a syllable is, or what pronunciation is, or if you asked a syntactician what a sentence is, or what grammar is, you might get a similar condensed answer with several big caveats. You certainly wouldn’t get a simple straightforward answer. In fact, one of the first tasks in any introduction to linguistics is to disabuse students of any simple answers that they may have internalized, and even to disabuse them of the notion that simple answers to such questions even exist.

This seems to leave us in a bit of a bind. If we don’t know what meaning is, how can we study it? Katz’s response: the same way we did with chemistry, biology, phonology, etc.—We identify a set of phenomena that are definitely under the umbrella of meaning, and go from there. Not to disappoint, Katz identifies 15 such phenomena which he frames as subquestions to the meaning question:

  1. What are synonymy and paraphrase?
  2. What are semantic similarity and semantic difference?
  3. What is antonymy?
  4. What is superordination?
  5. What are meaningfulness and semantic anomaly?
  6. What is semantic ambiguity?
  7. What is semantic redundancy?
  8. What is semantic truth (analyticity, metalinguistic truth, etc.)?
  9. What is semantic falsehood (contradiction, metalinguistic falsehood, etc.)?
  10. What is semantically undetermined truth or falsehood (e.g., syntheticity)?
  11. What is inconsistency?
  12. What is entailment?
  13. What is presupposition?
  14. What is a possible answer to a question?
  15. What is a self-answered question?

A formidable list to be sure, but, as far as I can tell, modern formal semantics only cares about 11–143. Katz expands on each of these with representative examples. I won’t go into those examples, but they all are based on intuitions that a person would have about linguistic meaning. If one takes these as the leading questions of semantic theory, Katz argues, then the simple answers to the meaning question lose their appeal, as they do not answer the subquestions 1–15, or at least cannot do so without a complex semantic theory to supplement them.

Furthermore, Katz points out that the debates between the competing simple answers all use arguments based on the phenomena that 1–15 ask about. Take, for instance, the best-known critique of the referentialist answer. If we assume that meaning=reference, then any two expressions that have the same referent must be synonymous. Gottlob Frege, the godfather of formal semantics, argued that there were expressions which had different meanings but the same referent, the classic example of which is the morning star and the evening star. The two expressions have different meanings (they differ as to when the star appears in the sky); however, they refer to the same object (the planet Venus). And once you start to think about it, you can come up with a seeming infinity of such examples.

Katz goes on to show that critiques of other simple answers to the meaning question are based on what he calls “strong pretheoretical intuitions,” all of which raise at least one of questions 1–15. His point here seems to be that we can’t divorce our semantic theory from pretheoretical intuitions such as the ones that form the basis of 1–15, so why not just embrace them? Why not throw away the “leading principles” and just try to build a theory that answers 1–15?

Katz closes the chapter by discussing skepticism with regards to meaning. It’s hard to honestly maintain skepticism, he argues, when we can marshal an extensive body of evidence that meaning exists. That body of evidence starts with an explication of 1–15, but likely extends beyond that. It is even harder to honestly maintain skepticism if we can build a theory that shows the regular and law-like behaviour of the evidence marshaled. Taking a suggestion from Quine (who played a major role in the preface), Katz compares the situation that he finds himself in to that which ancient astronomers found themselves in:

Astronomy found its answer to “What are planets?” by constructing a theory that explained planetary motion on the assumption that planets are physical objects that obey standard mechanical laws. In the same spirit, once we construct a theory that can successfully explain a reasonably large portion of semantic phenomena, we can base our answer to “What is meaning?” on what the theory had to assume meaning was in order to provide its explanations.

(Katz 1972, p10)

Semantics, as it is taught and studied today, is commonly considered by non-semanticists to be the most arcane and opaque subfield of linguistics. It’s not clear what is more obscure, the questions that semanticists ask or the formalism that they use to answer those questions. I often wonder if there is something endemic to questions of meaning that makes them seem arcane to many, or if it is a failing in the standard answer that leads to this feeling. This chapter of Katz’s book, for me, rules out the former. The questions in 1–15 are far from arcane, or, at least, they’re no more arcane than the questions that occupy the other subfields of linguistics. Maybe if we took Katz’s view of semantics, fewer students would run screaming (or grumbling, or yawning) from semantics classes.

In the next chapter, entitled “On the general character of semantic theory” Katz begins constructing his theory.


Footnotes (the links might not work, sorry)

  1. ^ I learned my semantics in a generative department where reference was the answer. Other departments might have had another answer.
  2. ^ and sometimes I’ve even given that answer as a teacher.
  3. ^ Entailment and inconsistency are the key phenomena. Presuppositions are useful as diagnostics. Questions, it seems, have only recently gained currency.

Instrumentalism in Linguistics

(Note: Unlike my previous posts, this one is not aimed at a general audience. This one’s for linguists.)

As a generative linguist, I like to think of myself as a scientist. Certainly, my field is not as mature and developed as physics, chemistry, and biology, but my fellow linguists and I approach language and its relation to human psychology scientifically. This is crucial to our identity. Sure, our universities consider linguistics a member of the humanities, and we often share departments with literary theorists, but we’re scientists!

Because it’s so central to our identity, we’re horribly insecure about our status as scientists. As a result of our desire to be seen as a scientific field, we’ve adopted a particular philosophy of science without even realizing it: Instrumentalism.

But, what is instrumentalism? It’s the belief that the sole, or at least primary, purpose of a scientific theory is its ability to generate and predict the outcome of empirical tests. So, one theory is preferable to another if and only if the former better predicts the data than the latter. A theory’s simplicity, intelligibility, or consistency is at best a secondary consideration. Two theories that have the same empirical value can then be compared according to these standards. Generative linguistics seems to have adopted this philosophy, to its detriment.

What’s wrong with instrumentalism? Nothing per se. It definitely has its place in science. It’s perfectly reasonable for a chemist in a lab to view quantum mechanics as an experiment-generating machine. In fact, it might be an impediment to their work to worry about how intelligible QM is. They would be happy to leave that kind of thinking to the theorists and philosophers while they, the experimenters, use the sanitized mathematical expressions of QM to design and carry out their work.

“Linguistics is a science,” the linguist thinks to themself. “So, linguists ought to behave like scientists.” Then, with a glance at the experimental chemist, the linguist adopts instrumentalism. But there’s a fallacy in that line of thinking: the fact that instrumentalism is an appropriate attitude for some people in a mature science, like chemistry, does not mean it should be the default attitude for people in a nascent science, like linguistics. In fact, there are good reasons for instrumentalism to be only a marginally acceptable attitude in linguistics. Rather, we should judge our theories on the more humanistic measures of intelligibility, simplicity, and self-consistency, in addition to consistency with experience.

What’s wrong with instrumentalism in linguistics?

So why can’t linguists be like the chemist in the lab? Why can’t we read the theory, develop the tests of the theory, and run them? There are a number of reasons. First, as some philosophers of science have argued, a theoretical statement is never put to the test by an empirical statement alone; rather, the former is tested by the latter in light of a suite of background assumptions. So, chemists can count the number of molecules in a sample of gas if they know its pressure, volume, and temperature. How do they know, say, the temperature of the gas sample? They use a thermometer, of course, an instrument they trust by virtue of their background assumptions regarding how matter, in general, and mercury, in particular, are affected by temperature changes. Lucky for chemists, those assumptions have centuries’ worth of testing and thinking behind them. No such luck for generative linguists: we’ve only got a few decades of testing and thinking behind our assumptions, which is reflected in how few empirical tools we have and how unreliable they are. Our tests for syntactic constituency are pretty good in a few cases (good enough to provide evidence that syntax traffics in constituency), but they give way too many false positives and negatives. Their unreliability means real syntactic work must develop diagnostics which are more intricate and which carry much more theoretical baggage. If a theory is merely a hypothesis-machine, and the tools for testing those hypotheses depend on the theory, how can we avoid rigging the game in our favour?
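(To make the chemist’s example concrete, here is a minimal worked calculation. It assumes ideal-gas behaviour, and the numbers are illustrative ones I’ve chosen: a one-litre sample at roughly atmospheric pressure and room temperature.)

\[ n = \frac{PV}{RT} = \frac{(101{,}325\ \mathrm{Pa})(0.001\ \mathrm{m^3})}{(8.314\ \mathrm{J\,mol^{-1}\,K^{-1}})(298\ \mathrm{K})} \approx 0.041\ \mathrm{mol} \]

\[ N = n \cdot N_A \approx 0.041 \times 6.022 \times 10^{23} \approx 2.5 \times 10^{22}\ \text{molecules} \]

Every symbol in that little calculation, from the gas constant R down to the thermometer reading that supplies T, is underwritten by exactly the sort of well-tested background assumptions that generative linguistics does not yet have.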

Suppose we have two theories, T1 and T2, which are sets of statements regarding an empirical domain D. T1 has been rigorously vetted and found to be internally consistent, simple, and intelligible, and predicts 80% of the facts in D. T2 is rife with inconsistencies, hidden complexities, and opaque concepts, but covers 90% of the facts in D. Which is the better theory? Instrumentalism would say T2 is superior due to its empirical coverage. Non-dogmatic people might disagree about the answer, but I suspect they would all be uncomfortable with instrumentalism as the sole arbiter in this case.

The second problem, which exacerbates the first, is that there’s too much data, and it’s too easy to get even more. This has resulted in subdisciplines being further divided into several niches, each devoted to a particular phenomenon or group of languages. Such a narrowing of the empirical domain, coupled with an instrumentalist view of theorizing, has frequently led to the development of competing theories of that domain, theories which are largely impenetrable to those conversant with the general theory but uninitiated in the niche in question. This is a different situation from the one described above. Here, T1 and T2 might each cover 60% of a subdomain D′, with the two portions overlapping. Each has a core set of facts that the other cannot, as yet, touch, so the two sides take turns claiming parts of the overlap as their sole territory, and no progress is made.

Often it’s the case that one of the competing specific theories is inconsistent with the general theory, but proponents of the other theory don’t use that fact in their arguments. In their estimation the data always trumps theory, regardless of how inherently theory-laden the description of the data is. It’s as if two factions were fighting each other with swords despite the fact that one side had a cache of rifles and ammunition that they decided not to use.

The third problem, one that has been noted by other theory-minded linguists here and here, is that the line between theoretical and empirical linguistics is blurry. To put it a bit more strongly, what is called “theoretical linguistics” is often empirical linguistics masquerading as theoretical. This assertion becomes clear when we look at the usual structure of a “theoretical syntax” paper in the abstract. First, a grammatical phenomenon is identified and demonstrated. After some discussion of previous work, the author reports the results of some diagnostics and, from those results, gives a formal analysis of the phenomenon. If we translated this into the language of a mature science, it would be indistinguishable from an experimental report: a phenomenon is identified and discussed, the results of some empirical techniques are reported, and an analysis is given.

You might ask “So what? Who cares what empirical syntacticians call themselves?” Well, if you’re a “theoretical syntactician,” then you might propose a modification of syntactic theory to make your empirical analysis work, and other “theoretical syntacticians” will accept those modifications and propose some modifications of their own. It doesn’t take too long in this cycle before the standard theory is rife with inconsistencies, hidden complexities, and opaque concepts. None of that matters, however, if your goal is just to cover the data.

Or, to take another common “theoretical” move, suppose we find an empirical generalization, G (e.g., all languages that allow X also allow Y). The difficult task of the theoretician is to show that G follows from independently motivated theoretical principles. The “theoretician,” on the other hand, has another path available: restate G in “theoretical” terms (e.g., a functional head, H, is responsible for both X and Y), and then (maybe) go looking for some corroboration. Never mind that restating G in different terms does nothing to expand our understanding of why G holds; understanding is always secondary for instrumentalism.

So, what’s to be done?

Reading this, you might think I don’t value empirical work in linguistics, which is simply not the case. Quite frankly, I am constantly in awe of linguists who can take a horrible mess of data and make even a modicum of sense out of it. Empirical work has value, but linguistics has somehow managed to both over- and under-value it. We over-value it by tacitly embracing instrumentalism as our guiding philosophy. We under-value it by giving the title “theoretical linguist” a certain level of prestige, as though empirical work were easier and less-than. All of this has led us to under-value theoretical work as well, viewing theoretical arguments as just gravy when they’re in our favour and as irrelevancies when they’re against us.

What we should strive for is an appropriate balance between empirical and theoretical work. To get to that balance, we must do the unthinkable and look to the humanities. To develop as a science, we ought to look at mature sciences, not as they are now, but as they developed. Put another way, we need to think historically. If we truly want our theory to explain the human language faculty, we need to accept that we will be explaining it to humans, and designing a theory that another human can understand requires us to embrace our non-rational qualities, like intuition and imagination.

In sum, we could all use a little humility. Maybe we’ll reach a point when instrumentalism will work for empirical linguistics, but we’re not there yet, and pretending we are won’t make it so.

Tarring Universal Grammar with the Brexit brush

Over at Psychology Today, Vyv Evans, cognitive linguist and UG critic, has written a piece criticizing generative linguistics, and those who defend its practice. In particular he criticizes what he sees as the shape-shifting nature of UG.

I don’t want to address the substance of Evans’ piece, but rather a rhetorical choice he makes: specifically, his decision to compare UG to Brexit, a choice that serves no purpose other than to give the reader a negative emotional impression of UG. (There was at least one other bit of rhetoric that bothered me, but I’ll save that for a later post.) “Brexit is bad. UG is like Brexit in some way. Therefore UG is bad.”

So, what do UG and Brexit have in common? Well, Evans begins by discussing the fact that members of the UK government can’t seem to reach a consensus as to what Brexit means. In Evans’ words:

…[T]here are now, perhaps, as many versions of Brexit as there are governmental ministers; moreover, each minister’s version of Brexit seems to change each time they are interviewed on the subject. Brexit is a shape-shifting chimera, as the UK government attempts to square the impossible circle of respecting the referendum result, and democracy, while attempting to avoid destroying the UK’s economy.

UG, in Evans’ estimation, is the same. Ask two generative grammarians what Universal Grammar means and you’ll get two distinct answers. Ask them again in a year and you’ll get two more answers. This leads Evans to the question of why generative grammarians can’t just settle on a definition of UG, to which he offers the answer: because it isn’t real.

Seems pretty damning, but let’s push on it just a bit. If the “shape-shifting” nature of generative linguistics follows from the fact that UG isn’t real, does that mean that Brexit also isn’t real? Surely, Evans doesn’t believe that Brexit is merely a figment of the UK government’s collective imagination. I don’t think he does, but I do think he knows that most of his readers wish Brexit were just a dream they could wake from. And maybe now they want UG theory to be false, too.

And if “shape-shifting” is a sign that UG is false, why bother with the talk of Brexit? Why not write an article surveying the many ways UG theory has changed and been reinterpreted since the 1950s, and how current generative grammarians still debate each other on what UG is and isn’t? Perhaps because that wouldn’t make it as easy for the reader to conclude that UG theory is patently false.

Just to drive my point home, let’s consider an article Evans could have written: one with the same logical structure, but a vastly different emotional structure.

What do Quantum Mechanics and Universal Grammar have in common?
Ask a layperson about quantum mechanics and they may tell you something about Schrödinger’s cat being both alive and dead until we observe it, but ask a physicist or a philosopher of physics, and who knows what they’ll say. They may talk about superposition or the randomness of the universe. They might talk about guiding waves. They might even talk about multiple realities. Or maybe they’ll just show you some equations and go back to their experiments. In fact, if you ask any number of physicists what quantum mechanics is, you’ll get the same number of responses, each differing from the others to varying degrees. And if you look at Universal Grammar theory, you’ll find a similar situation. Probably because UG simply isn’t real.

Such an article would never be published by Psychology Today, and any academic who wrote it would be laughed out of academia and scolded for suggesting that quantum mechanics might be a lie.

And to be perfectly clear, anyone who used such a comparison to quantum mechanics to assert Universal Grammar’s validity would be equally wrong, because (and I assume Evans knows this, since I take him to be neither stupid nor irrational) just because two things share one property doesn’t mean they share any other properties.

Quite frankly, I’m disappointed by Evans’ article. I’m disappointed that he’s resorted to this kind of emotional appeal rather than well-reasoned criticism of UG. Academics, like artists and politicians, need good criticism to keep them honest, because theories are made stronger not only by adding good arguments in their favour, but also by discarding bad arguments in their favour.