The DP Hypothesis—a case study of a sticky idea

Recently, in service of a course I’m teaching, I had a chance to revisit and fully engage with what might be the stickiest idea in generative syntax—the DP Hypothesis. For those of you who aren’t linguists, the DP Hypothesis, though highly technical, is fairly simple to get the gist of based on a couple of observations:

Observation 1: Words in sentences naturally cluster together into phrases like “the toys”, “to the store”, or “eat an apple.”

Observation 2: In every phrase, there is a single main word called the head of the phrase. So, for instance, the head of the phrase “eat an apple” is the verb “eat.”

These observations are formalized in syntactic theory, so that “eat an apple” is labeled a VP (Verb Phrase), while “to the store” is a PP (Preposition Phrase). Which leads us to the DP Hypothesis: Phrases like “the toys,” “a red phone,” or “my dog” should be labelled as DPs (Determiner Phrases) because their heads are “the,” “a,” and “my,” which are called determiners in modern generative syntax.

This is fairly counterintuitive, to say the least. The intuitive hypothesis—the one that pretty much every linguist accepted until the 1980s—is that those phrases are NPs (Noun Phrases), but if we only accepted intuitive proposals, there’d be no science to speak of. Indeed, all the good scientific theories start off counterintuitive and become intuitive only by force of argument. One of the joys of theory is experiencing that shift of mind-set—it can feel like magic when done right.

So it was quite unnerving when I started reading the actual arguments for the DP Hypothesis, which I had, at one point, fully bought into, and found myself becoming less convinced by each one. It didn’t feel like magic, it felt like a con.

My source for this is a handbook chapter by Judy Bernstein that summarizes the basic argument for the DP Hypothesis—a twofold argument consisting of a parallelism argument and purported direct evidence for the DP Hypothesis—as previously advanced and developed by Szabolcsi, Abney, Longobardi, Kayne, Bernstein herself, and others.

The parallelism argument is based on another counterintuitive theory developed in the mid-20th century which states that clauses, previously considered either headless or VPs, are actually headed by abstract (i.e., silent) words. That is, they are variously considered TPs (Tense Phrases), IPs (Inflection Phrases), or CPs (Complementizer Phrases). The parallelism argument states that “if clauses are like that, then ‘noun phrases’ should be like that too” and then finds data where “noun phrases” look like clauses in some way. This might seem reasonable on its face, but it’s a complete non sequitur. Maybe the structure of a “noun phrase” parallels that of a clause, but maybe it doesn’t. In fact, there’s probably good reason to think that the structure of “noun phrases” is the inverse of the structure of the clause—the clause “projects” from the verb, and verbs and nouns are complementary, so shouldn’t the noun have complementary properties to the verb?

Following through on parallelism, if extended VPs are actually CPs, then extended NPs are DPs. Once you have that hypothesis, you can start making “predictions” and checking if the data supports them. And of course there is data that becomes easy to explain once we have the DP Hypothesis. Again, this is good as far as it goes, but there’s a key word missing—”only.” We need data that only becomes easy to explain once we have the DP Hypothesis. And while I don’t have competing analyses for the data adduced for the DP Hypothesis at the ready—though Ben Bruening has one for at least one such phenomenon—I’m not really convinced that none exist.

And that’s the foundation of the DP Hypothesis, a weak argument resting on another weak argument. Yet, it’s a sticky one—I can count on one hand the contemporary generative syntacticians that have expressed skepticism about it. Why is it so sticky? My hypothesis is that it’s useful as a shibboleth and as a “project pump”.

Its usefulness as a shibboleth is fairly straightforward—there’s no quicker way to mark yourself as a generative syntactician than to put DPs in your tree diagrams. Even I find it jarring to see NPs in trees.

To see the utility of the DP Hypothesis as a “project pump”, one need only look at the Cartography/Nanosyntax literature. Once you open up a space for invisible functional heads between N and D, you seem to find them everywhere. This, I think, is what Chomsky meant when he described the DP Hypothesis as “…very fruitful, leading to a lot of interesting work” before saying “I’ve never really been convinced by it.” Who cares if it’s correct, it contains infinite dissertations!

Now maybe I’m being too hard on the DP and its fans. After all, as far as theoretical avenues go, the DP Hypothesis is something of a cul de sac, albeit a large one—the core theory doesn’t really care whether “the bee” is a DP or an NP, so what’s the harm? I could point out that by making such a feeble hypothesis our standard, we’ve opened ourselves to being dunked on by anti-generativists. Or I could bore you with such Romantic notions as “calling all things by their right names.” Instead, I’ll be practical and point out that, contrary to contemporary digital wisdom, the world is not infinite, and every bit of real estate given to the DP cul-de-sac in the form of journal articles, conference presentations, tenure-track hires, etc. is space that could be used otherwise. And, to torture the metaphor further, shouldn’t we try to use our real estate for work with a stronger foundation?

How do we get good at using language?

Or: What the hell is a figure of speech anyway?

At a certain level, I have the same English competence as Katie Crutchfield, Josh Gondelman, and Alexandria Ocasio-Cortez. This may seem delusionally boastful of me, but we’re all native speakers of a North American variety of English of a similar age, and this is the level of competence that linguists tend to care about. Indeed, according to our best theories of language, the four of us are practically indistinguishable.

Of course, outside of providing grammaticality judgements, I wouldn’t place myself anywhere near those three, each of whom could easily be counted among the most skilled living users of English. But what does it mean for people to have varied levels of skill in their language use? And is this even something that linguistic theory should be concerned about?

Linguists, of course, have settled on five broad levels of description of a given language:

  1. Phonetics
  2. Phonology
  3. Morphology
  4. Syntax
  5. Semantics

It seems quite reasonable to say we can break down language skill along these lines. So, skilled speakers can achieve a desired effect by manipulating their phonetics, say by raising their voices, hitting certain sounds in a particular way, or the like. Likewise, phonological theory can provide decent analyses of rhyme, alliteration, rhythm, etc. Skilled users of a language also know when to use (morphologically) simple vs complex words, and which word best conveys the meaning they intend. Maybe a phonetician, phonologist, morphologist, or semanticist will disagree, but these seem fairly straightforward to formalize, because they all involve choosing from among a finite set of possibilities—a language only has so many lexical entries to choose from. What does skill mean in the infinite realm of syntax? What does it mean to choose the correct figure of speech? Or, even more basically, how does one express any figure of speech in the terms of syntactic theory?

It’s not immediately obvious that there is any way to answer these questions in a generative theory for the simple reason that figures of speech are global properties of expressions, while grammatical theory deals in local interactions between parts of expressions. Take an example from Abraham Lincoln’s second inaugural address:

(1) Fondly do we hope—fervently do we pray—that this mighty scourge of war may speedily pass away.

There are three syntactic processes employed by Lincoln here that I can point out:

(2) Right Node Raising
Fondly do we hope that this mighty scourge of war may speedily pass away, and fervently do we pray that this mighty scourge of war may speedily pass away. -> (1)

(3) Subject-Aux Inversion
Fondly we hope … -> (1)

(4) Adverb fronting
We hope fondly… -> (1)

Each of these represents a choice—conscious or otherwise—that Lincoln made in writing his speech and, while most generative theories allow for choices to be made, these are not all choices at the same level of the theory.

Minimalist theories, for instance, allow for choices at each stage of sentence construction—you can either move a constituent, add a constituent, or stop the derivation. Each of (3) and (4) could conceivably be represented as a single choice, but it seems highly unlikely that (2) could. In fact, there is nothing approaching a consensus as to how right node raising is achieved, but it is almost certainly a complex phenomenon. It’s not as if we have a singular operation RNR(X) which changes a mundane sentence into something like (1), yet Lincoln and other writers and orators seem to have it as a tool in their rhetorical toolboxes.

Rhetorical skill of this kind suggests the possibility of a meta-grammatical knowledge, which all speakers of a language have to some extent, and which highly skilled users have in abundance. But what could this meta-grammatical knowledge consist of? Well, if the theoretical representation of a sentence is a derivation, then the theoretical representation of a figure of speech would be a class of derivations. This suggests an ability to abstract over derivations in some way and, therefore, it suggests that we are able to acquire not just lexical items, but also abstractions of derivations.

This may seem to contradict the basic idea of Minimalism by suggesting two grammatical systems, and indeed it might be a good career move on my part to declare that the fact of figures of speech disproves the SMT, but I don’t see any contradiction inherent here. In fact, what I’m suggesting here, and have argued for elsewhere, is a fairly basic observation from computer science and mathematical logic—that the distinction between operations and operands is not as sharp as it looks. I am merely suggesting that part of a mature linguistic knowledge is higher-order grammatical functions—functions that operate on other functions and/or yield other functions—and that, since any recursive system is probably able to represent higher-order functions, we should absolutely expect our grammars to allow for them.
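To make the idea slightly more concrete, here is a toy sketch in Python—purely illustrative, not a real grammar fragment, and every name in it is invented for the example. A “derivation” is modeled as a bare list of named steps, and a figure of speech (here, adverb fronting) is modeled as a higher-order object: a function from derivations to derivations, i.e., something that picks out a class of derivations rather than any particular sentence.

```python
# A toy illustration, not a real grammar fragment: a "derivation" is a list of
# named steps, and a figure of speech is a higher-order function over
# derivations. All names here are invented for the sketch.

from typing import Callable, List

Derivation = List[str]                       # a sequence of derivational steps
Figure = Callable[[Derivation], Derivation]  # a figure of speech: a map over derivations

def adverb_fronting(d: Derivation) -> Derivation:
    """Return a new derivation with any adverb-introducing step moved to the front."""
    adverbs = [step for step in d if step.startswith("merge-adverb")]
    rest = [step for step in d if not step.startswith("merge-adverb")]
    return adverbs + rest

# A mundane derivation for "we hope fondly"
plain: Derivation = ["merge-verb:hope", "merge-subject:we", "merge-adverb:fondly"]

# The figure of speech is the function itself, not any sentence it yields:
fronted: Figure = adverb_fronting
print(fronted(plain))
# ['merge-adverb:fondly', 'merge-verb:hope', 'merge-subject:we']
```

The point of the sketch is only that the figure lives at a different type than the sentences it produces—acquiring it would mean acquiring the function, not any particular output of it.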

Assuming this sort of abstraction is available and responsible for figures of speech, our task as theorists then is to figure out what form the abstraction takes, and how it is acquired, so I can stop comparing myself to Katie Crutchfield, Josh Gondelman, and AOC.

Why are there no Cartesian products in grammar?

This post, I think, doesn’t rise above the level of “musings.” I think there’s something here, but I’m not sure if I can articulate it properly.

An adequate scientific theory is one in which facts about nature are reflected in facts about the theory. Every entity in the theory should have an analogue in nature, relations in the theory should be found in nature, and simple things in the theory should be ubiquitous in nature. This last concern is at the core of minimalist worries about movement—early theories saw movement as complex and had to explain its ubiquity, while later theories see it as simple and have to explain the constraints on it. But my concern here is not minimalist theories of syntax, but model-theoretic semantics.

Model theories of semantics often use set theory as their formal systems,[1] so if they are adequate, then ubiquitous semantic phenomena should be simply expressible in set theory, and simple set-theoretic notions should be ubiquitous in semantics. For the most part this seems to be the case—you can do a lot of semantics with membership, subset, intersection, etc.—but obviously it’s not perfect. One point of mismatch is the notion of the Cartesian product (X × Y = {⟨x, y⟩ | x ∈ X, y ∈ Y}), a very straightforward notion in set theory, but one that does not have a neat analogue in language.

What do I mean by this? Well, consider the set-theoretic statement in (1) and its natural language translation in (2).

(1) P × P ⊆ R

(2) Photographers respect themselves and each other.

What set theory expresses in a simple statement, language does in a compound one. Or consider (3) and (4), which invert the situation.

(3) (P × P) − {⟨p, p⟩ | p ∈ P} ⊆ R

(4) Photographers respect each other.

The natural language expression has gotten simpler at the expense of its set-theoretic translation. This strikes me as a problem.

If natural language semantics is best expressed as set theory (or something similar), why isn’t there a simple bound expression like each-selves with the denotation in (5)?

(5) λX.λY (Y × Y ⊆ X)
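For concreteness, here is a small sketch of these denotations as executable set-theoretic checks (Python, with itertools.product standing in for the Cartesian product; the argument order follows (5), relation first, set second). The function names each_selves and each_other are mine, and each_selves in particular is the hypothetical denotation in (5)—it corresponds, as far as I can tell, to no actual word.

```python
# A sketch of the denotations above as executable set-theoretic checks.
# 'each_selves' is the hypothetical denotation in (5); 'each_other' is (3).
# P plays the role of the photographers, R the respect relation.

from itertools import product

def each_selves(R: set, P: set) -> bool:
    """(1)/(5): P x P is a subset of R — 'respect themselves and each other'."""
    return all(pair in R for pair in product(P, P))

def each_other(R: set, P: set) -> bool:
    """(3): (P x P) minus the diagonal is a subset of R — 'respect each other'."""
    return all((x, y) in R for x, y in product(P, P) if x != y)

P = {"ana", "bo"}                                   # the photographers
R = {("ana", "bo"), ("bo", "ana"), ("ana", "ana")}  # the respect relation

print(each_other(R, P))   # True: every distinct pair is in R
print(each_selves(R, P))  # False: ('bo', 'bo') is missing
```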

What’s more, this doesn’t seem to be a quirk of English. When I first noticed this gap, I asked some native speakers of languages other than English—I got data from Spanish, French (Canadian and Metropolitan), Dutch, Italian, Cantonese, Mandarin, Persian, Korean, Japanese, Hungarian, Kurdish, Tagalog, Western Armenian, and Russian[2]—and got fairly consistent results. Occasionally there was ambiguity between plural reflexives and reciprocals—French se, for instance, seemed to be ambiguous—but none of the languages had an each-selves.

My suspicion—i.e. my half-formed hypothesis—is that the “meanings” of reflexives and reciprocals are entirely syntactic. We don’t interpret themselves or each other as expressions of set theory or whatever. Rather, sentences with reflexives and reciprocals are inherently incomplete, and the particular reflexive or reciprocal tells the hearer how to complete them—themselves says “derive a sentence for each member of the subject where that member is also the object”, while each other says “for each member of the subject, derive a set of sentences where each object is one of the other members of the subject.” Setting aside the fact that this proposal is, even to me, mostly nonsense, it still predicts that there should be an each-selves. Perhaps making it sensible would fix this issue, or vice versa. Or maybe it is just nonsense, but plenty of theories started as nonsense.

Notes

[1] Yes, I know that there are many other types of model theories put forth.
[2] I’d be happy to get more data if you have it. You can email me, put it in the comments, or fill out this brief questionnaire.

Self-Promotion: I posted a manuscript to Lingbuzz.

Hi all,

I’ve been working on a paper for a few months and it’s finally reached the point where I need to show it to some people who can tell me whether or not I’m crazy. To that end, I posted it on LingBuzz.

It’s called “A workspace-based theory of adjuncts,” and, be forewarned, it’s pretty technical. So if you’re just here for my hot takes on why for-profit rent is bad, or what kind of science generative syntax is, or the like, it might not be for you.

If it is for you, and you have any comments on it, please let me know.

Happy reading!

What kind of a science is Generative Syntax?

Recently, I found myself reading Edmund Husserl’s Logical Investigations. I didn’t make it that far into it—the language is rather abstruse—but included in the fragments of what I did read was a section in which Husserl clarified something that I’ve been thinking about recently, which is the place of theory in a science. In the section in question, Husserl defines a science as a set of truths that belong together. So, the truths of physics belong together, and the truths of economics belong together, but the former and the latter don’t belong together. But what does it mean, Husserl asks, for truths to belong together?

Husserl’s answer is that it can mean one of two things. Either truths belong together because they share an internal unity or because they share an external unity. Truths—that is, true propositions—are linked by an internal unity if they are logically related. So, a theorem and the axioms that it is derived from share an internal unity, as would two theorems derived from a set of internally consistent axioms, and so on. The type of science characterized by internal unity, Husserl calls abstract, explanatory, or theoretical science. This class would include arithmetic, geometry, most modern physics, and perhaps other fields.

A set of truths has external unity if the members of the set are all about the same sort of thing. So, geography, political science, history, pre-modern physics, and so on would be the class of sciences characterized by external unity. Husserl calls these descriptive sciences.

When I read the description of this dichotomy, I was struck both by how simple and intuitive it was, and by how meaningful it was, especially compared to the common ways we tend to divide up the sciences (hard sciences vs soft sciences, science vs social science, etc.). The distinction also happens to neatly divide fields of inquiry into those that generate predictions (theoretical sciences) and those that do not (descriptive sciences). Why does a theoretical science generate predictions while a descriptive one does not? Well, consider the starting point of either of the two. A theoretical science, requiring internal unity, would start with axioms, which can be any kind of propositions, including universal propositions (e.g., “Every number has a successor”, “No mass can be created or destroyed”). On the other hand, a descriptive science, which requires external unity, would start with observable facts, which must be particular propositions (e.g., “The GDP of the Marshall Islands rose by 3% last year”, “That ball fell for 5 seconds”). This matters because deductive reasoning is only possible if a system has at least some universal premises. So, a theoretical science generates theorems, which constitute the predictions of that science. A descriptive science, on the other hand, is limited to inductive reasoning, which at best generates expectations. The difference is that if a theorem/prediction is false, then at least one of the axioms it is derived from must be false, while if an expectation is false, it doesn’t mean that the facts that “generated” that expectation are false.

Turning to the question I asked in my title, what kind of science is Generative Syntax (GS)? My answer is that there are actually two sciences—one theoretical, one descriptive—that answer to the name Generative Syntax, and that most of the current work is of the latter type. Note, I don’t mean to distinguish between experimental/corpus/field syntax and what’s commonly called “theoretical syntax”. Rather, I mean to say that, even if we restrict ourselves to “theoretical syntax,” most of the work being done today is part of a descriptive science in Husserl’s terminology. To be more concrete, let me consider two currently open fields of inquiry within GS: one which is quite active—ergativity—and one which is less popular—adjuncts.

Ergativity, for the uninitiated, is a phenomenon having to do with grammatical case. In English, a non-ergative language, pronouns come in two cases: nominative (I, he, she, they, etc.), which is associated with subjects, and accusative (me, him, her, them, etc.), which is associated with objects. An ergative language also has two cases: ergative, which is associated with subjects of transitive verbs, and absolutive, which is associated with objects of transitives and subjects of intransitives. To be sure, this is an oversimplification, and ergativity has been found to be associated with many other phenomena that don’t occur in non-ergative languages. Details aside, suppose we wanted to define a science of ergativity or, more broadly, a science of case alignment in Husserl’s terminology. What sort of unity would it have? I contend that it has only external unity. That is, it is a descriptive science. It begins with the fact that the case systems of some languages are different from the case systems that most linguistics students are used to. Put another way, if English were an ergative language, linguists would be puzzling over all these strange languages where the subjects always had the same case.

Adjuncts, a fancy term for modifiers, are the “extra” parts of sentences: adjectives and adverbs, the things newspaper editors hate. Adjuncts contrast with arguments (subjects, objects, etc.) and predicates, which each sentence needs, and needs in a particular arrangement. So, the sentences “She sang the song with gusto after dinner” and “She sang the song after dinner with gusto” are essentially identical, but “She sang the song” and “The song sang her” are wildly different. On its face, this is not particularly interesting—adjuncts are commonplace—but every unified theory of GS predicts that adjuncts should not exist. Take the current one, commonly called Minimalism. According to this theory, sentences are constructed by iterated application of an operation called Merge, which simply takes two words or phrases and creates a new phrase (Merge(X, Y) → {X, Y}, where {X, Y} is distinct from both X and Y). It follows from this that “She sang the song” and “The song sang her” are meaningfully distinct, but it also follows (falsely) that “She sang the song with gusto after dinner” and “She sang the song after dinner with gusto” are also meaningfully different. From this perspective, the study of adjuncts doesn’t constitute a science in itself, but rather is part of a science with internal unity, a theoretical science.
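The false prediction is easy to see if you simply write bare Merge down. Here is a minimal sketch—Python, with frozensets standing in for the unordered sets Merge builds, and with phrases flattened to strings purely for brevity; it is a cartoon of the definition above, not anyone’s actual implementation.

```python
# A minimal sketch of bare Merge as set formation. frozenset stands in for the
# unordered set {X, Y}; phrases are flattened to strings for brevity.

def merge(x, y):
    """Merge(X, Y) -> {X, Y}: form a new unordered object from two existing ones."""
    return frozenset({x, y})

vp = merge("sang", "the song")

gusto_first = merge(merge(vp, "with gusto"), "after dinner")
dinner_first = merge(merge(vp, "after dinner"), "with gusto")

# Each single application of Merge is order-insensitive, but iterated Merge is not:
print(merge("with gusto", vp) == merge(vp, "with gusto"))  # True
print(gusto_first == dinner_first)                          # False
```

The two adjunct orders come out as formally distinct objects; whether that formal distinctness should correspond to any difference in meaning is exactly the question adjuncts raise.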

So, despite the fact that research on ergativity and research on adjuncts both tend to be described as theoretical syntax in GS, the two are completely different sorts of sciences. Inquiry into the nature of adjuncts forms part of the theoretical science of syntax, while work on ergativity and, I would conjecture, the majority of current work that is called “theoretical syntax”, its use of formalisms and hypotheses notwithstanding, forms a descriptive science, which would be a part of a larger descriptive science.

Both sorts of science are valuable and, in fact, often complement each other. Accurate descriptions of the heavens were vital for early modern physicists to develop their theoretical models of mechanics, and novel theories often furnish descriptivists with better technology to aid their work. Where we get into trouble is when we confuse the two sorts of sciences. There’s an argument to be made, and it has been made by John Ralston Saul in his book Voltaire’s Bastards, that many of the problems in our society stem from treating descriptive social sciences, such as international relations, economics, and law, and even much of the humanities, as if they were theoretical sciences.

Turning back to syntax and taking a micro view, why am I grinding this axe? Well, I have two main reasons: one selfish, the other more altruistic. The selfish reason is that I am a theoretician in a descriptivist’s world. This manifests itself in a number of ways, but I’ll just highlight the immediate one for me: the job market. The academic job market is insanely competitive, and PhD students are expected at least to present at conferences in order to make a name for themselves. This is a problem because (a) there are no theoretical syntax conferences and (b) a standard 20-minute talk, while often sufficient to present a new descriptive analysis of a phenomenon, is not ideal for presenting theoretical work.

Beyond that, I think the confusion of the two sorts of sciences can exacerbate imposter syndrome, especially in graduate students. It took me a while to figure out why I had such a hard time understanding some of my colleagues’ work, and why some papers on “theoretical syntax” had such wildly different characters, arguments, and styles from others. I eventually figured it out, but every so often I see a grad student struggling to make sense of the field, and I just want to tell them that they’re not wrong: the field doesn’t really make sense, because it’s actually two fields.