The DP Hypothesis—a case study of a sticky idea

Recently, in service of a course I’m teaching, I had a chance to revisit and fully engage with what might be the stickiest idea in generative syntax—The DP hypothesis. For those of you who aren’t linguists, the DP hypothesis, though highly technical, is fairly simple to get the gist of based on a couple of observations:

Observation 1: Words in sentences naturally cluster together into phrases like “the toys”, “to the store”, or “eat an apple.”

Observation 2: In every phrase, there is a single main word called the head of the phrase. So, for instance, the head of the phrase “eat an apple” is the verb “eat.”

These observations are formalized in syntactic theory, so that “eat an apple” is labeled a VP (Verb Phrase), while “to the store” is a PP (Preposition Phrase). Which leads us to the DP hypothesis: Phrases like “the toys,” “a red phone,” or “my dog” should be labelled as DPs (Determiner Phrases) because their heads are “the,” “a,” and “my,” which are called determiners in modern generative syntax.
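To make the contrast concrete, here are the two analyses of “the toys” in simplified labelled-bracket notation (the internal details vary from theory to theory, but the difference in which word heads the phrase is the whole point):

Traditional NP analysis: [NP [D the] [N toys]]
DP Hypothesis: [DP [D the] [NP [N toys]]]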

This is fairly counterintuitive, to say the least. The intuitive hypothesis—the one that pretty much every linguist accepted until the 1980s—is that those phrases are NPs (Noun Phrases), but if we only accepted intuitive proposals, there’d be no science to speak of. Indeed, all the good scientific theories start off counterintuitive and become intuitive only by force of argument. One of the joys of theory is experiencing that shift of mind-set—it can feel like magic when done right.

So it was quite unnerving when I started reading the actual arguments for the DP hypothesis, which I had, at one point, fully bought into, and began to become less convinced by each one. It didn’t feel like magic; it felt like a con.

My source for this is a handbook chapter by Judy Bernstein that summarizes the basic argument for the DP Hypothesis—a twofold argument consisting of a Parallelism argument and purported direct evidence of the DP Hypothesis—as previously advanced and developed by Szabolcsi, Abney, Longobardi, Kayne, Bernstein herself, and others.

The parallelism argument is based on another counterintuitive theory developed in the mid-20th century which states that clauses, previously considered either headless or VPs, are actually headed by abstract (i.e., silent) words. That is, they are variously considered TPs (Tense Phrases), IPs (Inflection Phrases), or CPs (Complementizer Phrases). The parallelism argument states that “if clauses are like that, then ‘noun phrases’ should be like that too” and then finds data where “noun phrases” look like clauses in some way. This might seem reasonable on its face, but it’s a complete non sequitur. Maybe the structure of a “noun phrase” parallels that of a clause, but maybe it doesn’t. In fact, there’s probably good reason to think that the structure of “noun phrases” is the inverse of the structure of the clause—the clause “projects” from the verb, and verbs and nouns are complementary, so shouldn’t the noun have complementary properties to the verb?

Following through on parallelism, if extended VPs are actually CPs, then extended NPs are DPs. Once you have that hypothesis, you can start making “predictions” and checking if the data supports them. And of course there is data that becomes easy to explain once we have the DP Hypothesis. Again, this is good as far as it goes, but there’s a key word missing—”only.” We need data that only becomes easy to explain once we have the DP Hypothesis. And while I don’t have competing analyses for the data adduced for the DP Hypothesis at the ready—though Ben Bruening has one for at least one such phenomenon—I’m not really convinced that none exist.

And that’s the foundation of the DP Hypothesis, a weak argument resting on another weak argument. Yet, it’s a sticky one—I can count on one hand the contemporary generative syntacticians that have expressed skepticism about it. Why is it so sticky? My hypothesis is that it’s useful as a shibboleth and as a “project pump”.

Its usefulness as a shibboleth is fairly straightforward—there’s no quicker way to mark yourself as a generative syntactician than to put DPs in your tree diagrams. Even I find it jarring to see NPs in trees.

To see the utility of the DP Hypothesis as a “project pump”, one need only look at the Cartography/Nanosyntax literature. Once you open up a space for invisible functional heads between N and D, you seem to find them everywhere. This, I think, is what Chomsky meant when he described the DP Hypothesis as “…very fruitful, leading to a lot of interesting work” before saying “I’ve never really been convinced by it.” Who cares if it’s correct, it contains infinite dissertations!

Now maybe I’m being too hard on the DP and its fans. After all, as far as theoretical avenues go, the DP Hypothesis is something of a cul-de-sac, albeit a large one—the core theory doesn’t really care whether “the bee” is a DP or an NP, so what’s the harm? I could point out that by making such a feeble hypothesis our standard, we’ve opened ourselves to being dunked on by anti-generativists. Or I could bore you with such Romantic notions as “calling all things by their right names.” Instead, I’ll be practical and point out that, contrary to contemporary digital wisdom, the world is not infinite, and every bit of real estate given to the DP cul-de-sac in the form of journal articles, conference presentations, tenure-track hires, etc. is space that could be used otherwise. And, to torture the metaphor further, shouldn’t we try to use our real estate for work with a stronger foundation?

Canada’s double standard in Israel-Palestine

The Canadian government will “continue to follow the case very closely.” Those were the words of Canada’s Minister of Foreign Affairs Mélanie Joly in response to the ICJ’s preliminary findings in South Africa’s genocide case against Israel. She makes no mention of the fact that the court’s preliminary orders indicate that charges of genocide against Israel are not, as Liberal MP Anthony Housefather puts it, “baseless.” Nor does she indicate any move to withdraw Canada’s support of Israel, or even make it contingent on Israel so much as pretending to comply with the court’s order that it prevent acts of genocide, acts such as murdering three Palestinians in Gaza less than a day after being ordered to prevent such acts.

Compare this to the decision to pause funding of UNRWA—the UN agency responsible for providing relief to Palestinian refugees—following allegations by the Israeli government that UNRWA employees participated in the events of October 7th. For its part, UNRWA immediately fired three staff members and initiated an investigation. But instead of offering platitudes about watching the process closely, Minister of International Development Ahmed Hussen immediately paused funding for UNRWA.

So, in one case, we have a legitimate international court saying that, upon hearing arguments for and against, there is a prima facie plausible case against the State of Israel on the charge of genocide, and Canada adopts a wait-and-see approach, even as Israel appears to be ignoring the court. In the other case, we have mere allegations against employees of a UN agency, and Canada’s response is immediate action against the UN agency, even as the agency appears to be taking these allegations very seriously.

The double standard couldn’t be more plain.

The “science” of modern “AI”

(or Piantadosi and MLMs again (II)—continuation of this post)

In my critique of Prof. Piantadosi’s manuscript “Modern language models refute Chomsky’s approach to language,” I point out that regardless of the respective empirical results of Generative Linguistics and MLMs, the latter does not supersede the former because the two have fundamentally different goals. Generative Linguistics aims to provide a rational explanation of a natural phenomenon, while MLMs are designed to simulate human language use. Piantadosi does not dispute this, but rather states that

… there is an interesting debate about the nature of science lurking here. The critics’ position seems to be that in order for something to be a scientific theory, it must be intuitively comprehensible to us. I disagree because there are many phenomena in nature which probably will never admit a simple enough description for us to comprehend. We cannot just exclude these things from scientific inquiry.

p37 of v7 (emphasis in original)

Being one of the “critics” referred to here, I can grant the professor’s description of my position as basically accurate if a bit glib. But what is his position? He doesn’t say precisely, but we can make some inferences. In lieu of a clear statement of his position, for instance, Piantadosi follows the above quote with this:

There probably is no simple theory of a stock market (why IBM takes on a particular value) or dynamics in complex systems (why an O2 molecule hits a particular place on my eyeball). Certainly there are local, proximate causes (Tom Jones bid $142 for IBM; the O2 molecule was bumped by another), but when you start to trace these causes back into the complex system, you will quickly exceed our ability to understand the complex network of interactions.

p37 of v7

These are slightly bizarre comments, as we do have comprehensible (i.e., simple) theories of stock markets—the efficient markets hypothesis, for instance[1]—and gases—the kinetic theory, for instance—which can give approximate predictions regarding real-life events like the examples given. The professor’s view can be narrowed down slightly based on his assertion that Rawski & Baumont (2023) “seem to misunderstand the linkage between experiment and theory” (p34 of v7)[2] when they state that “Explanatory power, not predictive adequacy, forms the core of physics and ultimately all modern science.” (Rawski & Baumont 2023) It would seem clear, then, that, for Piantadosi at least, a “theory” is scientific only insofar as it has predictive power.

This may seem like a reasonable characterization—despite myriad insinuations to the contrary, virtually no one believes that predictive power is unimportant—but as soon as one attempts to develop that characterization, things get dicey. What, for instance, is the required level of accuracy and precision for science? And what sort of things should a true science be able to predict? To use one of Piantadosi’s examples, individual molecules are the primitives of the kinetic theory of gases, and the theory makes precise predictions about the behaviour of a gas—i.e., gas molecules in aggregate—but it is highly doubtful that it would make predictions about the actual motion of a particular molecule in any situation. Surely, this would be too much to ask of any theory of physics, yet Piantadosi seems to believe it is within the realm of scientific inquiry.

There’s also a question of what it means to “predict” something. Piantadosi’s argument boils down to “MLMs are better than Chomsky’s approach, because they make more correct predictions,” yet nowhere does he explicitly say what those predictions are, nor does he document any tests of those predictions. Instead, we are treated to his prompts to a chatbot followed by the chatbot’s response. Perhaps these are the predictions. Perhaps they predict how a human would respond to such prompts. If so, then so much the worse for MLMs qua scientific theories because, even if MLMs were indistinguishable from humans, the odds of any two humans answering a single question the same way are vanishingly slim, and any way to determine a general similarity between utterances would almost certainly be either arbitrary or dependent on some theoretical framework. At best, MLMs simulate human language use, meaning they no more predict facts of language than a compass predicts facts of geometry.

Chomsky’s approach to theories of language, on the other hand, makes clear predictions if one bothers to engage with it. The predictions are of the form “Given theoretical statement T, a competent speaker of language L will judge expression S as (un)acceptable in context C.” This is exactly the sort of prediction that one finds in other sciences—”if one performs precisely this action under precisely these conditions, one will observe precisely this reaction”—and the sort of prediction that is absent in Piantadosi’s paper.

Indeed, these predictions seem to be absent in the entire contemporary “AI” discourse, and with good reason—”AI” is not a scientific enterprise. It’s an engineering project, a fact that is immediately obvious when one considers how it measures success: against a battery of predetermined, arbitrary tests. MLM researchers, then, aren’t discovering truths, they’re building tools to spec, like good engineers.

This is not to cast aspersions on engineers, but it does raise a question—the core question: How exactly can an engineering project like MLMs refute a scientific theory like Generative Grammar?

Notes
1 This should not be taken as an endorsement of the efficient markets hypothesis—or any part of (neo)classical economics—as correct. A theory’s scientific-ness is no guarantee of its correctness.
2 This is a bold claim for Piantadosi to make given that he is a psychologist, while Lucie Baumont—the latter half of Rawski & Baumont—is an empirical astrophysicist.

Wrapping up 2023 with thanks, a pledge, and a humble request

As we approach the end of this year, I’ve had some time to reflect on my experience writing this blog, an experience that has been positive overall. It’s allowed me to explore various ideas that would not have fit cleanly in any traditional academic linguistics forum—either because they are out of step with the mainstream, too nebulous, or only loosely (or not at all) related to linguistics. It’s opened a few doors for me that might not otherwise have been opened. And it’s given me an opportunity to interact with folks that I might not have otherwise. For this, I’m grateful, and I want to extend a thank you to everyone who reads this blog and everyone who has reached out to me about its content, either in the comments or over email.

I’ve also decided that I need to write and publish more regularly, so starting in 2024, I plan to post a bit of writing at least once every two weeks.

Finally, I’ve realized how happy I am to have (relatively) complete control over my publishing platform, especially as I see corporate-controlled platforms continually “pivot” to try to profit off of writers’ work. Having control over my platform, however, has costs—server costs, that is. I’d rather not run ads beside my work, and I’m not yet ready to run a Patreon (or equivalent), so in the meantime, I have a request for you. If you’ve enjoyed my writing thus far and have the means, please consider extending some financial support here: https://www.buymeacoffee.com/milway. Any little bit will be greatly appreciated.

Thank you for reading, and I’ll see you in 2024!

Piantadosi and MLMs again (I)

Last spring, Steven Piantadosi, professor of psychology and neuroscience, posted a paean to Modern Language Models (MLMs) entitled Modern language models refute Chomsky’s approach to language on LingBuzz. This triggered a wave of responses from linguists, including one from myself, pointing out the many ways that he was wrong. Recently, Prof. Piantadosi attached a postscript to his paper in which he responds to his critics. The responses are so shockingly bad, I felt I had to respond—at least to those that stem from my critiques—which I will do, spaced out across a few short posts.

In my critique, I brought up the problem of impossible languages, as did Moro et al. in their response. In addressing this critique, Prof. Piantadosi surprisingly begins with a brief diatribe against “poverty of the stimulus.” I say surprisingly, not because it’s surprising for an empiricist to mockingly invoke “poverty of stimulus,” much in the same way as creationists mockingly ask why there are still apes if we evolved from them, but because poverty of stimulus is completely irrelevant to the problem of impossible languages and neither I nor Moro et al. even use the phrase “poverty of stimulus.”[1]

This irrelevancy expressed, Prof. Piantadosi moves on to a more on-point discussion. He argues that it would be wrong-headed to encode the constraints that make some languages impossible into our model from the start. Rather, if we start with an unconstrained model, we can discover the constraints naturally:

If you try to take constraints into account too early, you might have a harder time discovering the key pieces and dynamics, and could create a worse overall solution. For language specifically, what needs to be built in innately to explain the typology will interact in rich and complex ways with what can be learned, and what other pressures (e.g. communicative, social) shape the form of language. If we see a pattern and assume it is innate from the start, we may never discover these other forces because we will, mistakenly, think innateness explained everything

p36 (v6)

This makes a certain intuitive sense. The problem is that it’s refuted both by the history of generative syntax and the history of science more broadly.

In early theories, a constraint like “No mirroring transformations!” would have to be stated explicitly. Current theories, though, are much simpler, with most constraints being derivable from the theory itself rather than tacked onto it (if the structure-building operation never refers to linear order, for instance, a mirroring transformation cannot even be stated).

A digression on scholarly responsibility: Your average engineer working on MLMs could be forgiven for not being up on the latest theories in generative syntax, but Piantadosi is an Associate Professor who has chosen to write a critique of generative syntax, so he really ought to know these things. In fact, he could only not know these things through a conscious choice not to know, or through laziness.

Furthermore, the natural sciences have progressed thus far in precisely the opposite direction to the one Piantadosi prescribes—they have started with highly constrained theories, and progress has generally occurred when some constraint is questioned. Copernicus questioned the constraint that Earth stood still, Newton questioned the constraint that all action was local, Friedrich Wöhler questioned the constraint that organic and inorganic substances were inherently distinct.

None of this, of course, means that we couldn’t do science in the way that Piantadosi suggests—I think Feyerabend was correct that there is no singular Scientific Method—but the proof of the pudding is in the eating. Piantadosi is effectively making a promise that if we let MLM research run its course we will find new insights[2] that we could not find had we stuck with the old direction of scientific progress, and he may be right—just as AGI may actually be 5 years away this time—but I’ll believe it when I see it.


After expressing his methodological objections to considering impossible languages, Piantadosi expresses skepticism as to the existence of impossible languages, stating “More troubling, the idea of ‘impossible languages’ has never actually been empirically justified.” (p37, v6) This is a truly astounding assertion on his part considering both Moro et al. and I explicitly cite experimental studies that arguably provide exactly the empirical justification that Piantadosi claims does not exist. The studies cited present participants with two types of made-up languages—one which follows and one which violates the rules of language as theorized by generative syntax—and observe their responses as they try to learn the rules of the particular languages. The study I cite (Smith and Tsimpli 1995) compares the behavioural responses of a linguistic savant to those of neurotypical participants, while the studies cited by Moro et al. (Tettamanti et al., 2002; Musso et al., 2003) use neuro-imaging techniques. Instead, Prof. Piantadosi refers to every empiricist’s favourite straw-man argument—the alleged lack of embedding structures in Pirahã.

This bears repeating. Both Moro et al. and I expressly point to experimental evidence of impossible languages, and Piantadosi’s response is that no one has ever provided evidence of impossible languages.

So, either Prof. Piantadosi commented on my and Moro et al.’s critiques without reading them, or he read them and deliberately misrepresented them. It is difficult to see how this could be the result of laziness or even willful ignorance rather than dishonesty.

I’ll leave off here, and return to some of Prof. Piantadosi’s responses to my critiques at a later time.

Notes
1 For my part, I didn’t mention it because empiricists are generally quite assiduous in their refusal to understand poverty of stimulus arguments.
2 He seems to contradict himself later on when he asserts that the “science” of MLMs may never be intelligible to humans. More on this in a later post.

The Descriptivist Fallacy

A recent hobby-horse of mine—borrowed from Norbert Hornstein—is the idea that the vast majority of what is called “theoretical generative syntax” is not theoretical, but descriptive. The usual response when I assert this seems to be bafflement, but I recently got a different response—one that I wasn’t able to respond to in the moment, so I’m using this post to sort out my thoughts.

The context of this response was that I had hyperbolically expressed anger at the title of one of the special sessions at the upcoming NELS conference—”Experimental Methods In Theoretical Linguistics.” My anger—more accurately described as irritation—was that, since experiment and theory are complementary terms in science, the title of the session was contradictory unless the NELS organizers were misusing the terms. My point, of course, was that the organizers of NELS—one of the most prestigious conferences in the field of generative linguistics—were misusing the terms because the field as a whole has taken to misusing the terms. A colleague, however, objected, saying that generative linguists were a speech community and that it was impossible for a speech community to systematically misuse words of its own language. My colleague was, in effect, accusing me of the worst offense in linguistics—prescriptivism.

This was a jarring rebuttal because, on the one hand, they aren’t wrong: I was being prescriptive. But, on the other hand, and contrary to the first thing students are taught about linguistics, a prescriptive approach to language is not always bad. To see this, let’s consider the two basic rationales for descriptivism as an ethos.

The first rationale is purely practical—if we linguists want to understand the facts of language, we must approach them as they are, not as we think they should be. This is nothing more than standard scientific practice.

The second rationale is a moral one, stemming from the observation that language prescription tends to be directed at groups that lack power in society—Black English has historically been treated as “broken”, features of young women’s speech (“up-talk” in the 90s and “vocal fry” in the 2010s) are always policed, rural dialects are mocked. Thus, prescriptivism is seen as a type of oppressive action. Many linguists make it no further in thinking about prescriptivism, unfortunately, but there are many cases in which prescriptivism is not oppressive. Some good instances of prescriptivism—assuming they are done in good faith—are as follows:

  1. criticizing the use of obfuscatory phrases like “officer-involved shooting” by mainstream media
  2. calling out racist and antisemitic dog-whistling by political actors
  3. discouraging the use of slurs
  4. encouraging inclusive language
  5. recommending that a writer avoid ambiguity
  6. asking an actor to speak up

Examples 1 and 2 are obviously non-oppressive uses of prescriptivism, as they are directed at powerful actors; 3 and 4 can be acceptable even if not directed at a powerful person, because they attempt to address another oppressive act; and 5 and 6 are useful prescriptions, as they help the addressee to perform their task at hand more effectively.

Now, I’m not going to try to convince you that the field of generative syntax is some powerful institution, nor that the definition of “theory” is an issue of social justice. Here my colleague was correct—members of the field are free to use their terminology as they see fit. My prescription is of the third variety—a helpful suggestion from a member of the field that wants it to advance. So, while my prescription may be wrong, I’m not wrong to offer it.

Using anti-prescriptivism as a defense against critique is not surprising—I’m sure I’ve had that reaction to editorial suggestions on my work. In fact, I’d say it’s a species of a phenomenon common among folks who care about social justice, where folks mistake a formal transgression for a violation of an underlying principle. In this case the formal act of prescription occurred but without any violation of the principle of anti-oppression.

A response to Piantadosi (2023)

(Cross-posted on LingBuzz.)

It is perhaps an axiom of criticism that one should treat the object of criticism on its own terms. Thus, for instance, a photograph should not be criticized for its lack of melody. This axiom makes it difficult to critique a recent paper by Steven Piantadosi—hereafter SP—as it is difficult to determine what its terms are. It is ostensibly the latest installment of the seemingly perennial class of papers that argue, on the basis of either a new purported breakthrough in so-called AI or an exotic natural language dataset, that rationalist theories of grammar are dead wrong, but it is actually a curious mix of criticism of Generative Grammar, promissory notes, and promotion for OpenAI’s proprietary ChatGPT chatbot.

The confusion begins with the title of the paper, given in (1), which doubles as its thesis statement and contains a category error.

(1) Modern language models refute Chomsky’s approach to language.

To refute something is to show that it is false, but approaches do not have truth values. One can refute a claim, a theory, or a hypothesis, and one can show an approach to be ineffective, inefficient, or counterproductive, but one cannot refute an approach. The thesis of the paper under discussion, then, is neither true nor false, and we could be excused for ignoring the paper altogether.

Another axiom of criticism, though, is the principle of charity, which dictates that we present the best possible version of the object of our criticism. To that end, we can split (1) into two theses, (2) and (3).

(2) Modern language models refute Chomsky’s theories of language.
(3) Modern language models show Chomsky’s approach to language to be obsolete.

It is these theses that I address below.

The general shape of SP’s argument is as follows: (A) Chomsky claims that adult linguistic competence cannot be attained or simulated on the basis of data and statistical analysis alone. (B) The model powering ChatGPT simulates adult linguistic competence on the basis of data and statistical analysis alone. Therefore, (C) The model powering ChatGPT shows Chomsky’s claims to be false. To support his argument, SP presents queries and outputs from ChatGPT and argues that each refutes or approaches a refutation of a specific claim of Chomsky’s—each argument is of the form “Chomsky claims a purely statistical model could never do X, but ChatGPT can do (or can nearly do) X.”

As the hedging in this summary indicates, SP admits there are some phenomena for which ChatGPT does not exhibit human-like behaviour. For instance, when SP prompts the chatbot to generate ten sentences like (4), the program returns ten sentences, all of which share the syntactic structure of (4) and none of which are wholly meaningless like (4).

(4) Colorless green ideas sleep furiously.

SP explains this away, writing “[w]e can note a weakness in that it does not as readily generate wholly meaningless sentences …, likely because meaningless language is rare in the training data.” Humans can generate meaningless language, despite the fact that it is “rare in the training data” for us too. The autonomy of syntax, then, is an instance where OpenAI’s language model does not exhibit human-like behaviour. Furthermore, SP notes that current models require massive amounts of data to achieve their results—amounts far outstripping the amount of data available to a child. He also notes that the data is qualitatively different from that available to a child.[1] In doing so, he admits that modern language models (MLMs) are not good models of the human language faculty, contradicting one of the premises of his argument.

Though these empirical shortcomings of models like the one powering ChatGPT quite plainly refute (2), we do not even need such evidence to do so, as (2) is self-refuting. It is self-refuting because it does not address theoretical claims that Chomsky or, to my knowledge, any Generative theoretician has made. Far from claiming that MLMs could never do the things that ChatGPT can do, Chomsky has repeatedly claimed the opposite—that with enough data and computing power, a statistical model would almost certainly outperform any scientific theory in terms of empirical predictions. Indeed, this is the point of one of the quotes that SP includes:

You can’t go to a physics conference and say: I’ve got a great theory. It accounts for everything and is so simple it can be captured in two words: “Anything goes.”

All known and unknown laws of nature are accommodated, no failures. Of course, everything impossible is accommodated also.

Furthermore, Generative theories are about a component of human cognition[2], and nowhere does SP claim that “modern language models” are good models of human cognition. Indeed, this is an extension of the above discussion of the data requirements of MLMs, and logically amounts to a claim that the supposed empirical successes of MLMs are illusory without biological realism.

So, SP does not show that MLMs refute Chomsky’s theory, but what of his approach to language? Here we can look at the purported successes of MLMs. For instance, SP presents ChatGPT data showing grammatical aux-inversion in English, but provides no explanation as to how it achieves this. Such an explanation, though, is at the core of Chomsky’s approach to language. If MLMs do not provide an explanation, then how can they supplant Chomsky’s approach?

The failure of MLMs to supplant Chomsky’s approach can be demonstrated by extending one of SP’s metaphors. According to SP, the approach to science used by MLMs is the same one that is used to model and predict hurricanes and pandemics. Even if we assume this is true, it is also true that meteorological and epidemiological models have at their cores equations arrived at by theoretical/explanatory work done by physicists and biologists respectively. If MLMs supplant theoretical/explanatory linguistics, then hurricane and pandemic models should supplant physics and biology. No serious person would make this argument about physics or biology, yet it is fairly standard in linguistics.

Thus far we have been taking SP’s data at face value, and while there is absolutely no reason to believe that SP has falsified it in any way, there is still a serious problem with it—it is, practically speaking, unreplicable, since we have no access to the model that generated it. The data in the paper was generated by ChatGPT in early 2023. When it was initially released, ChatGPT worked with the GPT 3.5 model, and has since been migrated to GPT 4—both of which are closed-source. So, while SP adduces ChatGPT data as evidence in favour of the sort of models that he has developed as his research program, there is no way to know whether ChatGPT uses the same sort of model. Indeed, ChatGPT could be built atop a model based on Generative theories of language for all we know.

Returning to the axiom I started with—that one should criticize something on its own terms—the ultimate weakness of SP’s paper is its failure to follow it. Chomsky’s main critique of MLMs—alluded to in the quote above—is not that they are unable to produce grammatical expressions. It’s that if they were to be trained on data from an impossible language—a language that no human could acquire—they would “learn” that language just as easily as, say, English. One does not need to look very far to find Chomsky saying exactly this. Take, for instance, the following quote in which Chomsky responds to a request for his critique of current so-called AI systems.[3]

There’s two ways in which a system can be deficient. One way is it’s not strong enough—[it] fails to do certain things. The other way is it’s too strong—it does what it shouldn’t do. Well, my own interests happen to be language and cognition—language specifically. So take GPT. Gary Marcus [and] others have found lots of ways in which the system’s deficient—this system and others—[it] doesn’t do certain things. That can in principle at least be fixed—you add another trillion parameters, double the number of terabytes, and maybe do better. When a system is too strong, it’s unfixable typically, and that’s the problem with GPT and the other systems.

So if you give a database to the GPT system which happens to be from an impossible language—one that violates the rules of language—they’ll do just as well—often better, because the rules can be simpler. For example, one of the fundamental properties of the way language works—there’s good reasons for it—is that the rules, the core rules, ignore linear order of words—they ignore everything that you hear. They attend only to abstract structures that the mind creates. So it’s very easy to construct impossible languages which use very simple procedures involving linear order of words. [The] trouble is that’s not language, but GPT will do just fine with them. So it’s kind of as if somebody were to propose, say, a revised version of the periodic table which included all the elements—all the possible elements and all the impossible elements—and didn’t make any distinction between them. That wouldn’t tell us anything about elements. And if a system works just as well for impossible languages as for possible ones, [it is] by definition not telling us anything about language. And that’s the way these systems work—it generalizes to the other systems too. So the deep problem that concerns me is too much strength. I don’t see any conceivable way to remedy that.
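To make the kind of rule Chomsky is describing concrete, here is a toy illustration of my own (not an example drawn from Piantadosi or from the experimental literature): a rule stated purely over linear order is trivially easy to write down, and nothing about a string-prediction model rules it out.

# A toy "impossible" rule, stated purely over linear order: negate a sentence
# by inserting a negation marker after the third word. No attested human
# language works this way, but a string-based learner can represent it as
# easily as it can represent a structure-dependent rule.
def impossible_negation(sentence: str, marker: str = "not") -> str:
    words = sentence.split()
    return " ".join(words[:3] + [marker] + words[3:])

print(impossible_negation("the dog that barks is hungry"))
# -> "the dog that not barks is hungry"

A structure-dependent rule, by contrast, cannot even be stated without reference to the “abstract structures that the mind creates.”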

The key notion here is that of an “impossible language” which, though it seems to have an a priori flavour to it, is actually an empirical notion. Generative theory, like every scientific theory, predicts not only what is possible, but also what is impossible. For instance, generative theory predicts that linear order is not available to syntax, and therefore that no language has grammatical rules based on linear order. SP indirectly addresses this concern:

It’s worth thinking about the standard lines of questioning generative syntax has pursued—things like, why don’t kids ever say “The dog is believed’s owners to be hungry” or “The dog is believed is hungry” […]. The answer provided by large language models is that these are not permitted under the best theory the model finds to explain what it does see. Innate constraints are not needed.

Following this standard empiricist reasoning, there are no impossible languages, only languages which have yet to be seen.[4] If all we had to go on were descriptions of actually existing languages, then the empiricist and rationalist accounts would be equally plausible. Luckily for us, we are not limited in this way: we have experimental results that directly support the rationalist accounts—Smith and Tsimpli (1995), for instance, provides evidence that, while we can learn “impossible languages”, we do so in a fundamentally different way than we learn possible languages, with the former treated like puzzles rather than languages.

To summarize, SP purports to show that MLMs refute Chomsky’s approach to language—a logical impossibility. What he does show is that there are multiple aspects of adult English competence that ChatGPT is unable to simulate, and that in the cases where ChatGPT was able to mimic an adult English speaker, there is no explanation as to how. Neither of these results is germane to either Chomsky’s approach to language or his theories of language, as Chomsky studies the human capacity for language, which MLMs tell us nothing about. More importantly, SP does not even address Chomsky’s actual critique of MLMs qua models of language competence.

Notes
1 SP also wrongly implies that the data that informs actual language acquisition consists of child-directed speech.
2 This is the crux of the I-/E-language distinction that Chomsky often discusses.
3 Taken from extemporaneous speech. Edited to remove false starts and other disfluencies. Source: https://www.youtube.com/watch?v=PBdZi_JtV4c
4 Setting aside languages which are logical impossibilities, like a language which has and lacks determiners.

The problem with reporting on Bill C-18

About a year ago, Bill C-18—The Online News Act—was introduced into the Canadian House of Commons. On its face, C-18 will require online platforms like Google, Facebook, and Twitter to negotiate with Canadian news organizations. The coverage of C-18, at least what I’ve been seeing, has been … weird. Since most Canadian news orgs have a vested interest in the outcome, they haven’t been reliable. Instead, the coverage comes from media critics like Jesse Brown and law professor Michael Geist, who are, perhaps, less conflicted about the bill and who have been fairly consistently and sharply critical of it.[1] Between the two of them, they paint a picture of Postmedia, Torstar, and other media conglomerates using the Liberal government to shake down tech platforms for subsidies, and suggest that, while this shakedown might help the big guys, it will almost certainly harm independent news outlets and ordinary Canadians. Indeed, recent developments seem to have confirmed this story, as Google has made moves to block news links from Canadians, with Facebook/Instagram following suit.

But there’s always been something that’s bothered me about these narratives—for all their correct Herman-Chomsky-esque analysis of news media as consisting of huge profit-seeking corporations, they seem to assiduously avoid turning that lens on the tech platforms. Take, for instance, Prof. Geist’s framing of the news that Facebook planned to block news sharing for Canadians.

Rather than calling it what it is—a giant multinational corporation run by a billionaire attempting to extort the duly elected government of Canada with the threat of a capital strike—Geist calls it “the Consequence” of the government doing its job and attempting to regulate a market, which implies that what is happening is simply the laws of nature at work: throw a ball at X m/s at angle Y and it will trace parabola Z in the air; strike a healthy person’s knee just so and they will kick; try to regulate a market and it will cease to function. The only agents in the story are the government and the media companies, and they’re playing with forces they are either too stupid to understand or too corrupt to acknowledge. Facebook and Google, or more accurately, their managers, are not agents here, or to the extent that they are agents, they’re good-faith agents trying to provide a service—the shop-owner to the government and big media’s racketeer.

This framing couldn’t be farther from the truth. Not only are Google, Facebook, etc. actors in this dispute, they are often bad actors. Take, for instance, the infamous pivot to video, when Facebook told news and entertainment publishers that, according to The Data, the best way for publishers to drive users to their sites—i.e., to their advertisers—was to make videos instead of written content. Of course it turned out that The Data was bullshit. As Cory Doctorow put it: “Big Tech Isn’t Stealing News Publishers’ Content, It’s Stealing Their Money.” Google and Facebook are no innocent grocers being shaken down.

Including the Big Tech firms as actors also puts in a new light another concern over C-18 that’s been brought up, usually by Jesse Brown—that Bill C-18 would create a government registry of news media, with only those in the registry benefiting from the ability to bargain with Big Tech. Any sort of state press registry, of course, is at least in tension with the notion of a free press, as the original notion of a free press was in opposition to restrictive press licensing regimes in monarchical societies.

Adding Big Tech into the mix, though, complicates the matter. Google and Facebook are able to credibly extort governments because they have made themselves seem virtually indispensable to news media—the Big Tech “platforms” are how news gets disseminated. The threat to drop news was credible for a more sinister reason, too: Google and Facebook could actually do it—Google and Facebook know which sites publish news and they are able to shut them out of their platforms. Viewed this way, it’s hard to see the Big Tech “platforms” as anything but a potentially restrictive press registry, but a privately held registry, shielded from even the modicum of transparency and responsibility that an elected government has. Even if C-18 doesn’t require transparency or responsibility, it could serve as a precedent for further regulation of Big Tech.

But to be clear, I’m not here to defend Postmedia, Torstar, or the Liberal Party government of Canada. Big Media, as Herman and Chomsky have argued, consists of a handful of giant profit-seeking corporations that have no interest in competition, preferring to have an oligopoly, while Justin Trudeau’s government mostly lurches from corruption scandal to corruption scandal, and in between it’s a bog-standard centrist administration, meaning it does virtually nothing for Canadians while saying the right things. I’m fairly sure the only reason they’ve remained in power for this long is that the main opposition is obviously much worse.[2]

There don’t seem to be any good actors in this story, and that’s what makes it tough to talk about—that and the universality among Serious Commentators of a particular assumption called capitalist realism, expressed by one of its greatest proponents as “There is no alternative”, and by its critics as the fact that for most elites “it’s easier to imagine the end of the world than the end of capitalism.” We’re at a crisis point in media industries. Big Tech and Big Media depend on each other—Big Tech needs media to serve to its users, while Big Media needs platforms to serve its products to the consumers—but they also compete with each other—both industries are funded by a finite pool of advertising money. This is an untenable situation, as capitalist competition means one firm trying to put the other firm out of business, an outcome that, in this case, would mean self-destruction. Serious Commentators will always struggle to properly explain the nature of this crisis, because it’s not the fault of any of the individual actors, but something inherent in capitalism, and under capitalist realism capitalism is like air or water—maybe it’s polluted or corrupted a bit, but the idea that there’s anything per se wrong with it is unimaginable.

There’s another problem with the Big Tech–Big Media relationship that conflicts with capitalist realism—Big Tech is clearly the dominant side, despite the fact that it depends on Big Media.[3] Such a situation is almost unthinkable under capitalist realism, as it’s almost axiomatic that relations of dominance are, in fact, derived from dependence—Capitalists “create jobs”, Landlords “provide housing”, Slave owners “feed, clothe, and shelter” enslaved people. This is why truisms like “you don’t need your job, your job needs you” are so subversive. So the idea that Facebook needs media firms and also can effectively dictate their business practices is nonsense, no matter how much the facts suggest it.

And again, I’m not saying that the coverage should flip and take the side of Big Media and the Government—there are no good actors here. Rather, coverage should take the side of the people who are likely to be harmed by any outcome—the actual journalists and the consumers of journalism. Indeed, it’s difficult to have a clear-eyed view of this and similar dust-ups and not adopt the slogan ¡Que se vayan todos! (“They can all go to hell!”). What would such an approach mean? It would mean coverage that includes the context that Big Tech and Big Media are both collections of monopolistic profit-seekers, that reminds us that Big Tech keeps committing fraud, and that the Liberals promised us good things, including electoral reform, and reneged. This is all too much to hope for, but for a start, it would be nice for Serious Commentators to treat Big Tech as what it is—a cabal of monopolists threatening to punish Canadians for the crime of trying to regulate them.

Notes
1 I’m critical of Brown and Geist here, but not always. Brown has insightful takes on Canadian media more often than most other journalists, while Geist was somewhat heroic when it came to copyright and digital privacy in the earlier 21st century. Both seem incapable of seeing Big Tech clearly, though, which I suspect has to do with their relations to the cycles of enshittification at Google and Facebook. Maybe I’ll write a separate post about that.
2 There’s a narrative that probably stretches back to a time when the Whigs squared off against the Tories in which the left/liberal party pushes a country’s legislation forward and the right/conservative party resists such moves. The reverse is now true: the right/conservative parties actively enact barbaric and anti-social policies, and the left/liberal parties, despite promises of rolling back said policies, mostly just do nothing when in power.
3 Big Tech arguably needs Big Media more than the other way around. Big Tech, as a player in the news industry, is a creature of the 21st century—Google News came out in 2002, Facebook in 2004, Twitter in 2006—while Big Media goes back much farther, and Big Tech has repeatedly gone out of their way to entice media firms to become more integrated in the tech platforms they control.

How do we get good at using language?

Or: What the hell is a figure of speech anyway?

At a certain level, I have the same English competence as Katie Crutchfield, Josh Gondelman, and Alexandria Ocasio-Cortez. This may seem boastful of me to a delusional degree, but we’re all native speakers of a North American variety of English of a similar age, and this is the level of competence that linguists tend to care about. Indeed, according to our best theories of language, the four of us are practically indistinguishable.

Of course, outside of providing grammaticality judgements, I wouldn’t place myself anywhere near those three, each of whom could easily be counted among the most skilled users of English living. But what does it mean for people to have varied levels of skill in their language use? And is this even something that linguistic theory should be concerned about?

Linguists, of course, have settled on five broad levels of description of a given language:

  1. Phonetics
  2. Phonology
  3. Morphology
  4. Syntax
  5. Semantics

It seems quite reasonable to say we can break down language skill along these lines. So, skilled speakers can achieve a desired effect by manipulating their phonetics, say by raising their voices, hitting certain sounds in a particular way, or the like. Likewise, phonological theory can provide decent analyses of rhyme, alliteration, rhythm, etc. Skilled users of a language also know when to use (morphologically) simple vs complex words, and which word best conveys the meaning they intend. Maybe a phonetician, phonologist, morphologist, or semanticist will disagree, but these seem fairly straightforward to formalize, because they all involve choosing from among a finite set of possibilities—a language only has so many lexical entries to choose from. What does skill mean in the infinite realm of syntax? What does it mean to choose the correct figure of speech? Or even more basically, how does one express any figure of speech in the terms of syntactic theory?

It’s not immediately obvious that there is any way to answer these questions in a generative theory for the simple reason that figures of speech are global properties of expressions, while grammatical theory deals in local interactions between parts of expressions. Take an example from Abraham Lincoln’s second inaugural address:

(1) Fondly do we hope—fervently do we pray—that this mighty scourge of war may speedily pass away.

There are three syntactic processes employed by Lincoln here that I can point out:

(2) Right Node Raising
Fondly do we hope that this mighty scourge of war may speedily pass away, and fervently do we pray that this mighty scourge of war may speedily pass away. -> (1)

(3) Subject-Aux Inversion
Fondly we hope … -> (1)

(4) Adverb fronting
We hope fondly… -> (1)

Each of these represents a choice—conscious or otherwise—that Lincoln made in writing his speech and, while most generative theories allow for choices to be made, they do not locate those choices at the same level.

Minimalist theories, for instance, allow for choices at each stage of sentence construction—you can either move a constituent, add a constituent, or stop the derivation. Each of (3) and (4) could conceivably be represented as a single choice, but it seems highly unlikely that (2) could. In fact, there is nothing approaching a consensus as to how right node raising is achieved, but it is almost certainly a complex phenomenon. It’s not as if we have a singular operation RNR(X) which changes a mundane sentence into something like (1), yet Lincoln and other writers and orators seem to have it as a tool in their rhetorical toolboxes.

Rhetorical skill of this kind suggests the possibility of a meta-grammatical knowledge, which all speakers of a language have to some extent, and which highly skilled users have in abundance. But what could this meta-grammatical knowledge consist of? Well, if the theoretical representation of a sentence is a derivation, then the theoretical representation of a figure of speech would be a class of derivations. This suggests an ability to abstract over derivations in some way and, therefore, it suggests that we are able to acquire not just lexical items, but also abstractions of derivations.

This may seem to contradict the basic idea of Minimalism by suggesting two grammatical systems and indeed, it might be a good career move on my part to declare that the fact of figures of speech disproves the SMT, but I don’t see any contradiction inherent here. In fact, what I’m suggesting here, and have argued for elsewhere, is a fairly basic observation from computer science and mathematical logic—that the distinction between operations and operands is not as sharp as it seems. I am merely suggesting that part of a mature linguistic knowledge is higher-order grammatical functions—functions that operate on other functions and/or yield other functions—and that, since any recursive system is probably able to represent higher-order functions, we should absolutely expect our grammars to allow for them.
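To make that concrete, here is a deliberately crude sketch in Python (my own illustration, not any particular theory’s formalism) of what such a higher-order abstraction might look like: if individual derivational steps are modelled as functions, then a figure of speech like Lincoln’s can be modelled as a function that takes derivations as arguments and returns a new expression.

# A deliberately crude sketch: derivational steps as functions over strings,
# and a "figure of speech" as a higher-order function over those derivations.
from typing import Callable

Derivation = Callable[[str], str]  # toy stand-in for a real derivation

def adverb_fronting(adv: str) -> Derivation:
    # "we hope" -> "fondly we hope"
    return lambda clause: f"{adv} {clause}"

def aux_inversion(aux: str) -> Derivation:
    # "fondly we hope" -> "fondly do we hope"
    return lambda clause: clause.replace("we", f"{aux} we", 1)

def rnr_schema(shared: str) -> Callable[[Derivation, Derivation], str]:
    # The higher-order abstraction: take two clause-building derivations and
    # return a coordination in which the right-edge material is shared.
    # (The clause cores are fixed here purely to keep the toy short.)
    def combine(left: Derivation, right: Derivation) -> str:
        return f"{left('we hope')}—{right('we pray')}—{shared}"
    return combine

lincoln = rnr_schema("that this mighty scourge of war may speedily pass away")
print(lincoln(
    lambda c: aux_inversion("do")(adverb_fronting("fondly")(c)),
    lambda c: aux_inversion("do")(adverb_fronting("fervently")(c)),
))
# -> "fondly do we hope—fervently do we pray—that this mighty scourge of war
#     may speedily pass away"

Nothing hangs on the details; the point is only that once derivations are things the system can quantify over, a figure of speech has a natural representation as a function over derivations, which is exactly the kind of object a speaker could plausibly acquire.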

Assuming this sort of abstraction is available and responsible for figures of speech, our task as theorists then is to figure out what form the abstraction takes, and how it is acquired, so I can stop comparing myself to Katie Crutchfield, Josh Gondelman, and AOC.

My Top Culture Things of 2022

It’s the end of 2022 and I’ve got nothing else to do, so I thought I’d share some of the works of culture that really made my year (even including things that weren’t made in 2022).

I did one of these before in 2019, but something happened (and continues to happen) and I missed the following two years, so a couple of these might be things I discovered in 2020 or 2021 but continued to really enjoy this year.

The Revolutions Podcast by Mike Duncan

The first episode of Revolutions came out in 2013, and the series finale was just released on Christmas Day of 2022. I started listening to it this year and managed to go through the entire catalogue. It’s a sprawling look at the revolutionary period that was kicked off by the English civil wars and ended with the Russian Revolution, including the revolutions in the US, Central/South America, Haiti, and Mexico, the several revolutions in France, and the revolutionary uprisings in 1848. I was initially skeptical of the idea of an American podcaster recounting revolutions, fearing it might end up being nothing but simplistic narratives, but I was pleasantly surprised by the nuance and detail that Duncan draws out of these histories. He covers the historical, political, and even ecological factors that shaped revolutions, and draws interesting connections and parallels between seemingly unrelated revolutions. If I had one critique it would be that, while Duncan certainly doesn’t endorse a Great Man theory of history, he does, in my opinion, give fairly short shrift to popular movements that lack a charismatic leader—anarchists in the Russian Revolution, Anti-Federalists in the US, The Diggers in the English civil wars, to name a few. This is certainly an unfair critique stemming from my own biases, and it in no way detracts from my enjoyment of the podcast.

Orwell’s Roses by Rebecca Solnit

“In the spring of 1936, a writer planted roses.” This is the opening line of Orwell’s Roses, and in some ways the puzzle that its author Rebecca Solnit is trying to solve—why would an apparently grim pessimist like Orwell bother with planting something as apparently frivolous as roses? The book is an exploration of both Orwell and roses and a refutation of their reputations as being grim and frivolous respectively. Solnit’s almost stream-of-consciousness style of writing belies the fact that she’s making an argument and backing it up with research and reason. The argument seems to be a perennial one on the left as to what place the non-material welfare of people should have—should leftists be concerned with beautiful things like roses, or are such concerns ultimately bourgeois? Solnit is decidedly on the side of roses, and argues that Orwell was too.

The book somehow manages to be extremely readable but dense, poetic but journalistic. Definitely worth it.

Andor

I’ve been pretty much done with Star Wars for a few years now. I didn’t see The Rise of Skywalker and other than The Mandalorian—which I watched because I was out of things to watch in lockdown and would describe as “fine”—I’ve steered clear of the streaming shows. So when I heard they were making a series about the origin story of the second lead in Rogue One—a film I enjoyed—I thought “wow, they’re really scraping the bottom of the barrel here”, and boy was I wrong! Everything about the show feels fresh, its links to the Star Wars canon are so tenuous that it could almost not even be a Star Wars series, and it definitely has something to say. Even the choice of protagonist—Cassian Andor, the petty thief transformed into a revolutionary—is interesting precisely because, as Alan Sepinwall notes, Cassian might be the least compelling character in the show. But while Sepinwall sees this as a flaw, I can’t help but see it as a secret weapon. Because Cassian doesn’t hog the screen, the secondary and tertiary characters get to have their say and make their perspectives known. Andor, much like The Wire—a comparison already made by David Klion—is ultimately a social drama. It’s much more interested in exploring the links between capitalism, imperialism, colonialism and fascism, and the nuances of resistance and rebellion—the showrunner, Tony Gilroy, apparently listens to the Revolutions podcast—than any individual relationships, though it doesn’t shy away from exploring the personal impacts of the social.

Actual critics have done the show more justice than I can, but one last thing I want to highlight is the score by Nicholas Britell, which has the epic orchestral sweeps that you’d expect but also jarringly centers a wobbly detuned synth for much of its running time, highlighting the fact that the world of the show is rather shaky—teetering on the brink of collapse. Again, really not something I expected from a Star Wars franchise.

The Sloppy Boys Podcast/The Blowout

The Sloppy Boys are a comedy party rock band consisting of Jeff Dutton, Mike Hanford, and Tim Kalpakis, all former members of The Birthday Boys sketch group. In 2020, just when COVID hit, they released their third album Paradiso, and without the possibility of touring to promote the album, they decided to start a cocktail podcast. Tale as old as time, really.

The premise of the show is simple: every week, the Boys make a new cocktail—the Trinidad Sour was an early classic—and talk about it. Add to that the fact that these are three good friends and some of the funniest guys on the planet and they legitimately make each other laugh and you’ve got an excellent podcast. They also have a second show The Blowout available to Patreon subscribers—patróns in the parlance of the show—where they talk about whatever they want—best guitar solo, taking a bath, going to the mall, the 80s movie Gremlins, the best Christmas aspect, to name a few. It’s sometimes truly the thinnest of premises but Jeff, Mike, and Tim always manage to make it great!

LIFE ON EARTH by Hurray For The Riff Raff

Hurray For The Riff Raff is the musical project of Alynda Segarra, a singer-songwriter originally from The Bronx, who formed the band when they moved to New Orleans. I first encountered Hurray For The Riff Raff in their 2017 album The Navigator—an album which you should absolutely seek out—and they released LIFE ON EARTH this year. While The Navigator was big and overtly political, LIFE ON EARTH kind of snuck up on me. It’s a smaller sort of album and much earthier than its predecessor—with titles like “WOLVES”, “RHODODENDRON“, “JUPITER’S DANCE” and “ROSEMARY TEARS“—but not devoid of politics—”PRECIOUS CARGO” tells the story of a migrant coming across the US/Mexico border only to be abused by US authorities. I don’t think it got much press, but when I was reviewing the music I’d listened to this year, I realized LIFE ON EARTH had really wormed its way into my rotation as one of my familiar records, even though it’s less than a year old.

Honourable mentions

The Time of Monsters Podcast with Jeet Heer.

Jeet Heer is a Canadian journalist and critic. On his podcast, he talks with other commentators about some current topic in the news, politics, or culture. The podcast is also completely unpolished, which very much adds to its charm.

Everything Everywhere All at Once

A wonderfully unique movie amid what’s become the standard fare of Disney-owned IP and other studios trying to emulate/compete with Disney. Any attempt to describe the plot would do it a great disservice, so all I can say is you should watch it if you can.

Roses by Jadea Kelly

I met Jadea in undergrad where she would often perform at our college open mic. She was clearly talented, so when she sent out a Kickstarter request to help fund her next album I was happy to throw in a few bucks. Flash forward several years to 2022, when I got notified that her album was complete and a CD was on its way to me. I didn’t know what to expect, but I was completely floored by what I heard—well-written songs with mature, poignant lyrics, beautifully performed and produced. An early standout and still one of my favourite tracks: “When I Fly”

Dan Padley

Dan is a jazz guitarist out of Iowa City, IA, whom I met through the Sloppy Boys Discord server. He released an excellent solo EP this year as well as an LP with Jarrett Purdy; both are well worth a listen. He also regularly posts cool guitar covers of whatever songs he feels like on his Instagram and YouTube channel.