Piantadosi and MLMs again (I)

Last spring, Steven Piantadosi, professor of psychology and neuroscience, posted a paean to Modern Language Models (MLMs) entitled Modern language models refute Chomsky’s approach to language on LingBuzz. This triggered a wave of responses from linguists, including one from me, pointing out the many ways that he was wrong. Recently, Prof. Piantadosi attached a postscript to his paper in which he responds to his critics. His responses are so shockingly bad that I felt I had to respond—at least to those that stem from my critiques—which I will do across a few short posts.

In my critique, I brought up the problem of impossible languages, as did Moro et al. in their response. In addressing this critique, Prof. Piantadosi surprisingly begins with a brief diatribe against “poverty of the stimulus.” I say surprisingly, not because it’s surprising for an empiricist to mockingly invoke “poverty of stimulus” much in the same way as creationists mockingly ask why there are still apes if we evolved from them, but because poverty of stimulus is completely irrelevant to the problem of impossible languages and neither I nor Moro et al. even use the phrase “poverty of stimulus.”[1]

This irrelevancy expressed, Prof. Piantadosi moves on to a more on-point discussion. He argues that it would be wrong-headed to encode the constraints that make some languages impossible into our model from the start. Rather, if we start with an unconstrained model, we can discover the constraints naturally:

If you try to take constraints into account too early, you might have a harder time discovering the key pieces and dynamics, and could create a worse overall solution. For language specifically, what needs to be built in innately to explain the typology will interact in rich and complex ways with what can be learned, and what other pressures (e.g. communicative, social) shape the form of language. If we see a pattern and assume it is innate from the start, we may never discover these other forces because we will, mistakenly, think innateness explained everything

p36 (v6)

This makes a certain intuitive sense. The problem is that it’s refuted both by the history of generative syntax and the history of science more broadly.

In early theories, a constraint like “No mirroring transformations!” would have to be stated explicitly. Current theories, though, are much simpler, with most constraints being derivable from the theory rather than tacked onto it.
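To give a concrete sense of what a “mirroring transformation” would be, here is a toy sketch (my own illustration, not drawn from any of the theories or experiments discussed here): a rule that forms a question by reversing the linear order of the words, contrasted with a crude stand-in for a structure-dependent rule.

```python
# A hypothetical contrast between an "impossible" linear rule and a
# (crudely approximated) structure-dependent one. Illustrative only.

def mirror_question(sentence: str) -> str:
    """The 'impossible' rule: build a question by reversing word order."""
    return " ".join(reversed(sentence.split()))

def aux_fronting_question(sentence: str, aux: str = "will") -> str:
    """A stand-in for a possible rule: front the auxiliary."""
    words = sentence.split()
    if aux in words:
        words.insert(0, words.pop(words.index(aux)))
    return " ".join(words)

if __name__ == "__main__":
    s = "the student will read the book"
    print(mirror_question(s))        # book the read will student the
    print(aux_fronting_question(s))  # will the student read the book
```

No attested language forms questions by mirroring, and the generative claim is that rules of the first kind should be excluded by the theory rather than being merely unattested.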

A digression on scholarly responsibility: Your average engineer working on MLMs could be forgiven for not being up on the latest theories in generative syntax, but Piantadosi is an Associate Professor who has chosen to write a critique of generative syntax, so he really ought to know these things. In fact, he could only fail to know them through laziness or a conscious choice not to know.

Furthermore, the natural sciences have progressed thus far in precisely the opposite direction from the one Piantadosi prescribes—they have started with highly constrained theories, and progress has generally occurred when some constraint is questioned. Copernicus questioned the constraint that the Earth stood still, Newton questioned the constraint that all action was local, and Friedrich Wöhler questioned the constraint that organic and inorganic substances were inherently distinct.

None of this, of course, means that we couldn’t do science in the way that Piantadosi suggests—I think Feyerabend was correct that there is no singular Scientific Method—but the proof of the pudding is in the eating. Piantadosi is effectively making a promise that if we let MLM research run its course we will find new insights[2] that we could not find had we stuck with the old direction of scientific progress, and he may be right—just as AGI may actually be 5 years away this time—but I’ll believe it when I see it.


After expressing his methodological objections to considering impossible languages, Piantadosi expresses skepticism as to the existence of impossible languages, stating, “More troubling, the idea of ‘impossible languages’ has never actually been empirically justified” (p37, v6). This is a truly astounding assertion on his part, considering that both Moro et al. and I explicitly cite experimental studies that arguably provide exactly the empirical justification that Piantadosi claims does not exist. The studies cited present participants with two types of made-up languages—one that follows and one that violates the rules of language as theorized by generative syntax—and observe their responses as they try to learn the rules of the particular languages. The study I cite (Smith and Tsimpli 1995) compares the behavioural responses of a linguistic savant to those of neurotypical participants, while the studies cited by Moro et al. (Tettamanti et al., 2002; Musso et al., 2003) use neuro-imaging techniques. Instead, Prof. Piantadosi refers to every empiricist’s favourite straw-man argument—the alleged lack of embedding structures in Pirahã.

This bears repeating. Both Moro et al. and I expressly point to experimental evidence of impossible languages, and Piantadosi’s response is that no one has ever provided evidence of impossible languages.

So, either Prof. Piantadosi commented on my critique and Moro et al.’s without reading them, or he read them and deliberately misrepresented them. It is difficult to see how this could be the result of laziness or even willful ignorance rather than dishonesty.

I’ll leave off here, and return to some of Prof. Piantadosi’s responses to my critiques at a later time.

Notes
1 For my part, I didn’t mention it because empiricists are generally quite assiduous in their refusal to understand poverty of stimulus arguments.
2 He seems to contradict himself later on when he asserts that the “science” of MLMs may never be intelligible to humans. More on this in a later post.

The Descriptivist Fallacy

A recent hobby-horse of mine—borrowed from Norbert Hornstein—is the idea that the vast majority of what is called “theoretical generative syntax” is not theoretical, but descriptive. The usual response when I assert this seems to be bafflement, but I recently got a different response—one that I wasn’t able to respond to in the moment, so I’m using this post to sort out my thoughts.

The context of this response was that I had hyperbolically expressed anger at the title of one of the special sessions at the upcoming NELS conference—”Experimental Methods In Theoretical Linguistics.” My anger—more accurately described as irritation—was that, since experiment and theory are complementary terms in science, the title of the session was contradictory unless the NELS organizers were misusing the terms. My point, of course, was that the organizers of NELS—one of the most prestigious conferences in the field of generative linguistics—were misusing the terms because the field as a whole has taken to misusing the terms. A colleague, however, objected, saying that generative linguists were a speech community and that it was impossible for a speech community to systematically misuse words of its own language. My colleague was, in effect, accusing me of the worst offense in linguistics—prescriptivism.

This was a jarring rebuttal because, on the one hand, they aren’t wrong: I was being prescriptive. But, on the other hand and contrary to the first thing students are taught about linguistics, a prescriptive approach to language is not always bad. To see this, let’s consider the two basic rationales for descriptivism as an ethos.

The first rationale is purely practical—if we linguists want to understand the facts of language, we must approach them as they are, not as we think they should be. This is nothing more than standard scientific practice.

The second rationale is a moral one, stemming from the observation that language prescription tends to be directed at groups that lack power in society—Black English has historically been treated as “broken”, features of young women’s speech (“up-talk” in the 90s and “vocal fry” in the 2010s) are always policed, rural dialects are mocked. Thus, prescriptivism is seen as a type of oppressive action. Many linguists make it no further in thinking about prescriptivism, unfortunately, but there are many cases in which prescriptivism is not oppressive. Some good instances of prescriptivism—assuming they are done in good faith—are as follows:

  1. criticizing the use of obfuscatory phrases like “officer-involved shooting” by mainstream media
  2. calling out racist and antisemitic dog-whistling by political actors
  3. discouraging the use of slurs
  4. encouraging inclusive language
  5. recommending that a writer avoid ambiguity
  6. asking an actor to speak up

Examples 1 and 2 are obviously non-oppressive uses of prescriptivism, as they are directed at powerful actors; 3 and 4 can be acceptable even if not directed at a powerful person, because they attempt to address another oppressive act; and 5 and 6 are useful prescriptions, as they help the addressee to perform their task at hand more effectively.

Now, I’m not going to try to convince you that the field of generative syntax is some powerful institution, nor that the definition of “theory” is an issue of social justice. Here my colleague was correct—members of the field are free to use their terminology as they see fit. My prescription is of the third variety—a helpful suggestion from a member of the field that wants it to advance. So, while my prescription may be wrong, I’m not wrong to offer it.

Using anti-prescriptivism as a defense against critique is not surprising—I’m sure I’ve had that reaction to editorial suggestions on my work. In fact, I’d say it’s a species of a phenomenon common among folks who care about social justice, where people mistake a formal transgression for a violation of an underlying principle. In this case the formal act of prescription occurred but without any violation of the principle of anti-oppression.

Unmoored theory

I’ve written before about the dichotomy of descriptive vs theoretical sciences, but I’ve recently noticed another apparent dichotomy within theoretical sciences—expansionary vs focusing sciences. Expansionary sciences are those whose domain tends to expand—(neo)classical economics seems to claim all human interaction in its domain; formal semantics now covers pragmatics, hand gestures, and monkey communication—while focusing sciences tend to have a rather constant domain or even a shrinking one—chemistry today is about pretty much the same things as it was in the 17th century; generative syntactic theory is still about the language faculty. Assuming this is true,[1] the question is whether it reflects some underlying difference between these sciences. I’d like to argue that the distinction follows from how firm a science’s foundations are, and in particular from what I’ll call its empirical conjecture.

Every scientific theory, I think, basically takes the form of a conjoined sentence “There are these things/phenomena in the world and they act like this.” The second conjunct is the formal system that gives a theory its deductive power. The first conjunct is the empirical conjecture, and it turns the deductions of the formal system into predictions. While every science that progresses does so by positing new sorts of invisible entities, categories, etc., they all start with more or less familiar entities, categories, etc.—planets, metals, persons, and so on. This link to the familiar is the empirical foundation of a science. Sciences with a firm foundation are those whose empirical conjecture can be uncontroversially explained to a lay person or even an expert critic operating in good faith.

Contemporaries of, say, Robert Boyle might have thought the notion of corpuscles insanity, but they wouldn’t disagree that matter exists, exists in different forms, and that some of those forms interact in regular ways. Even the fiercest critic of UG, provided they are acting in good faith, would acknowledge that humans have a capacity for language and that that capacity probably has to do with our brains.

The same, I think, cannot be said about (neo)classical economics or formal semantics.[2] Classical economics starts with the conjecture that there are these members of the species homo economicus—the perfectly rational, self-interested, utility-maximizing agent—and derives theorems from there. This is obviously a bad characterization of humans. It is simultaneously too dim a view of humans—we behave altruistically and non-individualistically all the time—and one that gives us far too much credit—we are far from perfectly rational. Formal semantics, on the other hand, starts with the conjecture that meaning is reference—that words have meaning only insofar as they refer to things in the world. While not as obviously false as the homo economicus conjecture, the referentialist conjecture is still false—most words, upon close inspection, do not refer,[3] and there is a whole universe of meaning that has little to do with reference.

Most economists and semanticists would no doubt object to what the previous paragraph says about their discipline, and the objections would take one of two forms. Either they would defend homo economicus/referentialism, or they would downplay the importance of the conjecture in question—“Homo economicus is just a useful teaching tool for undergrads. No one takes it seriously anymore!”[4] “Semanticists don’t mean reference literally, we use model theory!”—and it’s this sort of response that I think can explain the expansionary behaviour of these disciplines. Suppose we take these objections to be honest expressions of what people in the field believe—that economics isn’t about homo economicus and formal semantics isn’t about reference. Well then, what are they about? The rise of behavioural economics suggests that economists are still looking for a replacement model of human agency, and model theory is basically just reference delayed.

The theories, then, seem to be about nothing at all—or at least nothing that exists in the real world—and as a result, they can be about anything at all—they are unmoored.

Furthermore, there’s an incentive to expand your domain when possible. A theory of nothing obviously can’t be justified by giving any sort of deep explanation of any one aspect of nature, so it has to be justified by appearing to offer explanations to a breadth of topics. Neoclassical economics can’t seem to predict when a bubble will burst, or what will cause inflation, but it can give what looks like insight into family structures. Formal semantics can’t explain why “That pixel is red and green.” is contradictory, but it provides a formal language to translate pragmatics into.

There’s a link here to my past post about falsification, because just as a theory about nothing can be a theory about anything, a theory about nothing cannot be false. So, watch out—if your empirical domain seems to be expanding, you might not be doing science any more.

Notes
1 It’s pretty much a tautology that a science’s domain will either grow, stay constant, or shrink over time.
2 Now obviously, there’s a big difference between the two fields—neoclassical economics is extremely useful to the rich and powerful since it lets them justify just about any horrendous crimes they would want to commit in the name of expanding their wealth and power, while formal semantics is a subdiscipline of a minor oddball discipline on the boundaries of humanities, social science, and cognitive science. But I’m a linguist, and I think mostly linguists read this.
3 I could point you to my own writing on this, the works of Jerrold Katz, and arguments from Noam Chomsky on referentialism, or I could point out that one of the godfathers of referentialism, Ludwig Wittgenstein, seems to have repudiated it in his later work.
4 Though, as the late David Graeber pointed out, economists never object when homo economicus is discussed in a positive light.

What does falsification look like anyway?

Vulcan vs Neptune

There’s an argument that plays out every so often in linguistics that goes as follows:

Critic: This data falsifies theory T.
Proponent: Not necessarily, if you consider arguments X,Y, and Z.
Critic: Well, then theory T seems to be unfalsifiable!

This is obviously a specious argument on the part of the critic, since unfalsified does not entail unfalsifiable, but I think it stems from a very understandable frustration—theorists often have an uncanny ability to wriggle free of data that appears to falsify their theories, even though falsificationism is assumed by a large majority of linguists. The problem is that the logic of falsificationism, while quite sound, maybe even unimpeachable, turns out to be fiendishly difficult to apply.

At its simplest, the logic of falsificationism says that a theory is scientific insofar as one can construct a basic statement (i.e., a statement of fact) that would contradict the theory. This, of course, is an oversimplification of Karl Popper’s idea of Critical Rationalism in a number of ways. For one, falsifiability is not an absolute notion. Rather, we can compare the relative falsifiability of two theories by looking at what Popper calls their empirical content—the number of basic statements that would contradict them. So if a simple theoretical statement P has a particular empirical content, then the conjunction P & Q will have a greater empirical content, and the disjunction P v Q will have a lesser empirical content. This is a useful heuristic when constructing or criticizing a theory internally, and seems like a straightforward guide to testing theories empirically. Historically, though, it has not worked out that way, largely because it is often difficult to recognize when we’ve arrived at and accurately formulated a falsifying fact. In fact, it is often, maybe always, the case that we don’t recognize a falsifying fact as such until after one theory has been superseded by another.
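To picture the comparison, write F(T) for the set of basic statements that would contradict a theory T (the set notation is mine, used only for illustration). Then, roughly:

```latex
\[
  F(P \lor Q) \;=\; F(P) \cap F(Q)
  \;\subseteq\; F(P)
  \;\subseteq\; F(P) \cup F(Q)
  \;\subseteq\; F(P \land Q)
\]
```

A basic statement contradicts the disjunction only if it contradicts both disjuncts, whereas anything that contradicts either conjunct contradicts the conjunction, so the conjunction forbids more and thereby says more.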

Take for instance the case of the respective orbits of Mercury and Uranus. By the 19th century, Newtonian mechanics had allowed astronomers to make very precise predictions about the motions of the planets, and based on those predictions, there was a problem: two of the planets were misbehaving. First, it was discovered that Uranus—then the farthest known planet from the sun—wasn’t showing up where it should have been. Basically, Newton’s mechanics predicted that on such and so day and time Uranus would be in a particular spot in the sky, but the facts were otherwise. Rather than cry “falsification!”, though, the astronomers of the day hypothesized an object on the other side of Uranus that was affecting its orbit. One such astronomer, Urbain Le Verrier, was even able to work backwards and predict where that object could be found. So in September of 1846, armed with Le Verrier’s calculations, Johann Gottfried Galle was able to observe an eighth planet—Neptune. Thus, an apparent falsification became corroboration.

Urbain Le Verrier (1811-1877)
Johann Galle (1812-1910)

I’ve previously written about this story as a vindication of the theory-first approach to science. What I didn’t write about, and what is almost never discussed in this context, is Le Verrier’s work on the misbehaving orbit of Mercury. Again, armed with Newton’s precise mechanics, Le Verrier calculated the Newtonian prediction for Mercury’s orbit, and again[1] Mercury didn’t behave as expected. Again, rather than throw out Newtonian mechanics, Le Verrier hypothesized the planet Vulcan between Mercury and the sun, and set about trying to observe it. While many people claimed to observe Vulcan, none of these observations were reliably replicated. Le Verrier was undeterred, though, perhaps because observing a planet that close to the sun was quite tricky. Of course, it would be easy to paint Le Verrier as an eccentric—indeed, his Vulcan hypothesis is somewhat downplayed in his legacy—but he doesn’t seem to have been treated so by his contemporaries. The Vulcan hypothesis wasn’t universally believed, but neither does it seem to have been the Flat-Earth theory of its day.

It was only when Einstein used his General Theory of Relativity to accurately calculate Mercury’s orbit that the scientific community seems to have abandoned the search for Vulcan. Mercury’s orbit is now considered a classic successful test of General Relativity, but why don’t we consider it a refutation of Newtonian Mechanics? Strict falsificationism would seem to dictate that, but then a strict falsificationist would have thrown out Newtonian Mechanics as soon as we noticed Uranus misbehaving. So, falsificationism of this sort leads us to something of a paradox—if a single basic statement contradicts a theory, there’s no way of knowing whether there is some second basic statement that, in conjunction with the first, could save the theory.

Still, it’s difficult to toss out falsification entirely, because a theory that doesn’t reflect reality may be interesting but isn’t scientific.[2] Also, any reasonable person who has ever tried to give an explanation of any phenomenon probably rejects most of their own ideas rather quickly on empirical grounds. We should instead adopt falsificationism as a relative notion—use it when comparing multiple theories. So, Le Verrier was ultimately wrong, but acted reasonably—he had a pretty good theory of mechanics, so he worked to reconcile it with some problematic data. Had someone developed General Relativity in Le Verrier’s time, it would have been unreasonable to insist that a hypothesized planet was a better explanation than an improved theory.

Returning to the hypothetical debate between the Critic and the Proponent, then, I think a reasonable albeit slightly rude response for the proponent would be “Well, do you have a better theory?”

Notes
1 Technically though, Le Verrier’s work on Mercury predated his work on Uranus.
2 Though sometimes, theories which seem to be empirically idle end up being scientifically important (cf. non-Euclidean geometry).

On the notion of an intellectual coup

In chapter nine of his book Goliath: The 100-Year War Between Monopoly Power and Democracy, Matt Stoller recounts the story of the genesis of the Chicago School of law & economics—the school of thought which has come to dominate virtually every aspect of the Western power structure since the 1970s. In Stoller’s telling, it truly could be considered an epochal moment in economics, law, political science, and related disciplines, much as Copernican heliocentrism was for physics, or Mendel’s laws were for biology, or Generative Grammar was for psychology. The shift in thinking brought on by the Chicago School was perhaps as drastic and far-reaching as those brought on by these intellectual revolutions. Yet, in reading it, it struck me that it would be wrong to describe the founding of the Chicago School as a revolution because it wasn’t one—it was an intellectual coup.

But what makes something an intellectual revolution? What makes it an intellectual coup? To stick with the analogy to political processes, the difference is legitimacy—revolutions are legitimate changes, while coups are illegitimate. Legitimacy, of course, is hard to judge objectively, but still, to call something a revolution is to judge it to be legitimate. The violent 1973 overthrow of the democratically elected Allende government in Chile is commonly called a “coup” rather than a revolution. Similarly, historian Michael J. Klarman refers to the US Constitutional Convention as a coup to indicate that he judges it to have been illegitimate. And importantly, the revolution-coup distinction doesn’t boil down to the simple subjective value judgement that revolutions are good and coups are bad. So, while conservatives the world over likely agree that the American Revolution was good, many argue that the French and Russian revolutions were bad. Interestingly, though, I don’t know that many people would think that a coup could be good. So, while most Americans would probably say the Constitutional Convention was good, they probably wouldn’t describe it as a coup, perhaps because illegitimacy is per se bad.

So what makes a shift of ideas illegitimate—what makes it an intellectual coup? To see this we should look at what a legitimate shift looks like. The stories we’re used to hearing involve a disinterested person (or possibly a group) proposing a new idea in an open forum, while making an honest critical argument that it is superior to a contemporaneously widely-accepted idea. The proposal must be open, so that fair criticisms can be aired. The proposer should be disinterested in the sense that the proposed idea is not a means to some other material end (e.g., money or political influence), but rather an end in itself. The discourse around the idea should acknowledge and address the idea’s antecedents and rivals, because this allows the larger community to accurately assess the merits of the new idea.

We can see all of these criteria in the great shifts in the history of ideas. Even Galileo and Copernicus, whose work predated any of the modern intellectual institutions—like peer-reviewed journals, conferences, or universal primary education—that we all take for granted, opened their work to criticism—not by their peers primarily, but by the Inquisition—and did so not as a means to an end but for the sake of the ideas themselves—what self-interested person would open themselves to the punishment that a renaissance inquisition could dole out? Finally, it would be hard to credibly suggest that the early heliocentrists ignored or misrepresented their intellectual competitor, which had been taken as religious dogma, uncritically believed by their contemporaries. The very story of the Copernican revolution is one of competing ideas.

An illegitimate shift would go against one or more of these criteria. It would develop an idea in a less-than-open way; it would be put forth on behalf of some interest group, or as a means to an end for the proposer; or it would either ignore or caricature its competitor-ideas. And more often than not, the latter infraction will be the most characteristic feature of an intellectual coup. Taking the rise of the Chicago School, and its views on monopoly and antitrust, as Stoller recounts it, as our prototype, we can see all of these features in play.

The story starts with wealthy businessman and New Deal enemy Harold Luhnow using his foundation, the Volker Fund, to finance a right-wing research project at the University of Chicago, and continues with the project’s leading academic Aaron Director gathering a cadre of acolytes and eventually using private funds to start a journal that would be friendly to their ideas. What really allowed the Chicago School to change from a fringe endeavour to the dominant school of thought in the Western social sciences, in Stoller’s assessment, were a pair of rhetorical misappropriations: adopting “the language of Jeffersonian democracy” and “the apolitical language of science.”

Jeffersonian democracy was in favour of the rights of the individual in opposition to centralized power, a stance that comes from Classical Liberalism and that the Chicago School loudly endorsed. The rhetorical trick, though, is that the Chicago School (and modern right-libertarians) treated authoritarian institutions like corporations as individuals and democratic institutions like labour unions as centralized power. Yet, even a cursory glance at many of the paragons of classical liberalism shows a number of views that we would now associate with a radical left-wing position. Some of Marx’s economic ideas come almost directly from Adam Smith, ideas like the labour theory of value, or the essentially parasitic nature of landlords. Of course, these views of Smith that don’t jibe with the right-wing caricature of him are either ignored or treated as a source of embarrassment. This move, of course, was aided by the fact that, by the time the right-wing Chicago School was appropriating the classical liberal tradition, the American left seemed to be pushing that tradition away. In fact, a recurring theme in Stoller’s book is that the left has largely ceded populism to the right and embraced elitism.

Using the rhetoric of “science”, though, has probably been a much more powerful trick, because the general public, including much of the elite, has an attitude toward science that is about as positive as its understanding of the term is murky. Nearly everyone—even flat-earthers, anti-vaxxers, and climate deniers—thinks science is good, but no one could define it. Sure, some would say something about experimental methods, or falsificationism, or spout some Kuhnian nonsense, and everyone would probably agree that quantum physics is a science, while film criticism is not, but few probably realize that philosophers of science have been consistently unable to pin down what constitutes a science. So, when an economist throws graphs and equations at us and declares scientific a statement that offends common sense, very few people are intellectually equipped to dispute them. In the case of the Chicago School, they were at an advantage because, until they adopted it, the claim that economics (along with politics, law, and history) could be a science like physics was probably only held by strict Marxists. The opposing position was one that worried about notions like power and democracy—hardly the kinds of ideas amenable to scientific analysis. If you think that Google doesn’t really compete in an open market, but uses its market power to crush all competition, then you probably also think the sun revolves around the earth.

While the moneyed interests backing the Chicago School and its insular nature in the early days certainly indicate that it was not likely to lead a legitimate intellectual shift, its rhetorical tricks, I believe, are what make its success a coup rather than a revolution, and what have made its ideas so stubborn. They foster the oppressive slogan “There is no alternative.” By co-opting the great thinkers of the enlightenment, the Chicago School can paint any opponents as anti-rational romantics, and by misappropriating the language of science, they can group dissenters with conspiracy theorists and backwards peasants. This makes it seem like a difficult position to argue against, but as many have discovered recently, it’s a surprisingly brittle position.

Take, for instance, the Chicago School position on antitrust laws—that they were intended as a consumer protection. This has been the standard position of antitrust enforcers in the U.S., and it’s based on an article by Robert Bork. It’s how obvious monopolists, like Google and Facebook, have escaped enforcement thus far. But, as Stoller’s book documents, the actual legislative intent of U.S. antitrust laws had nothing to do with consumer welfare, and everything to do with power. Bork’s article, then, was a work of fiction, and once you understand that, the entire edifice of modern antitrust thinking begins to crumble.

So, the Chicago School carried out an intellectual coup—one that struck virtually every aspect of our society—but have there been intellectual coups in other fields? Two spring to mind for me—one in physics, and one in my own field of linguistics. Before I describe them, though, a brief word on motivations as an aspect of intellectual coups is in order.

One of the features of an intellectual coup that I described above is that of an ulterior motive driving it. In the case of the Chicago School it was driven by capitalists set on dismantling the New Deal for their own financial interests. Does that mean that everyone who subscribes to the Chicago School does so so that billionaires can make more money? Not at all. There are definitely Chicago Schoolers who are true believers. Indeed, I would wager that most, if not all, of them are. Hell, even political coups have true believers in them. What about the particular ulterior motives? Are all intellectual coups done on behalf of capital? No. Motivations take all sorts of forms, and are often subconscious. Bold claims are often rewarded with minor celebrity or notoriety, which might have material benefits like job offers or the like. They are also sometimes correct. So, if a researcher makes a bold claim, are they doing so to stand out among their peers or are they doing so because they truly believe the claim? It’s almost never possible to tell. Since intellectual coups are essentially based on intellectual dishonesty, and it’s probably a safe choice to assume that those who enact an intellectual coup are capable and well-meaning people, discussions of motivations are useful for understanding how a capable and well-meaning person could get caught up in a coup. As such, I will focus more on the means rather than the motive when diagnosing a coup.

The Copenhagen Quantum Coup

If you’re at all interested in the history of science, you may have heard of the Bohr-Einstein debate. The narrative that you likely heard was that in the early 20th century, the world community of physicists had accepted quantum mechanics with a single holdout, Albert Einstein, who engaged Niels Bohr in a debate at the 5th Solvay Conference in 1927. Einstein made a valiant argument, capping it with the declaration that “God does not play dice!” When it was Bohr’s turn, he wiped the floor with Einstein, showing that the old man was past his prime and out of step with the new physics. He even used Einstein’s own theory of relativity against him! And with that, quantum mechanics reigned supreme, relegating all critics to the dustbin of history.

It’s a good story and even has a good moral about the fallibility of even a genius like Einstein. The trouble, though, at least according to Adam Becker in his excellent book What is Real?, is that the debate didn’t go down like that. For starters, Einstein wasn’t skeptical about quantum mechanics, but rather had questions about how we are to interpret it. Bohr was advocating for what’s misleadingly called “the Copenhagen Interpretation”, which basically says that there is no way to give quantum theory a realist interpretation; all we can do is solve the equations and compare the solutions to experimental results. Furthermore, as Becker recounts, Einstein’s arguments weren’t out of step with contemporary physics. In fact, they were brilliantly simple thought experiments that struck at the very core of quantum mechanics. Their simplicity, however, meant that they sailed over the heads of Bohr and his cadre. It was Bohr’s response that missed the point. And finally, that famous quote from Einstein was in a letter to his friend Max Born, not at the conference in question.

This certainly has the hallmarks of an intellectual coup—it depends on a rhetorical trick of manipulating a narrative to favour one outcome, it shuts down debate by lumping dissenters in with the anti-rationalists, and it’s rather brittle—but it’s not quite as bald-faced as the Chicago School coup. Even as Becker tells it, the scientists in Bohr’s camp probably believed that Einstein was losing it and that he’d missed the point entirely. What’s more, the Copenhagen perspective, which the popularized telling of the debate supports, is not a pack of falsehoods like the Chicago School, but rather an overly narrow conception of the nature of scientific inquiry—a conception called “instrumentalism” which tends to banish humanistic questions of truth, reality, and interpretation to the realm of philosophy and views “philosophy” as a term of abuse.

But where is the dishonesty that I said every coup was based on? It seems to have come in the form of laziness—Bohr and his compatriots should have made a better effort to understand Einstein’s critique. This laziness, I believe, rises to the level of dishonesty, because it ended up benefiting the Copenhagen perspective in a predictable way. As Becker describes, Bohr, for various reasons, wanted to show that Quantum Mechanics as formulated in the 1920s was complete and closed—a perfect theory. Paradoxes and interpretive issues, such as the ones that Einstein was raising, revealed imperfections, which had to be ignored. Whether Bohr had all of this in his mind at the Solvay Conference is beside the point. His, and his followers’, was a sin of omission.

The Formal Semantics Coup

The standard theoretical framework of contemporary semantics, at least within the generativist sphere, is known as formal semantics. Few semanticists would likely agree that there is such a thing as a standard theory, but those same semanticists probably agree on the following:

  1. The meaning of a word or a phrase is the thing or set of things that that word or phrase refers to.
  2. The meaning of a sentence is its truth conditions.
  3. Linguistic meanings can be expressed by translating expressions of a Natural Language into formulas of formal logic.
  4. Any aspect of language that doesn’t meet the requirements of 1-3 is outside the domain of semantics.
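For concreteness, here is the sort of textbook treatment that tenets 1-3 license (a toy example of my own, not taken from any particular semanticist): word meanings are modelled as sets, and the sentence is mapped to a formula of predicate logic whose truth conditions are its meaning.

```latex
% Tenet 1: denotations are things or sets of things.
\[
  [\![\text{linguist}]\!] = \{x : x \text{ is a linguist}\}
  \qquad
  [\![\text{sleeps}]\!] = \{x : x \text{ sleeps}\}
\]
% Tenets 2 and 3: the sentence's meaning is its truth conditions,
% stated as a formula of formal logic.
\[
  [\![\text{Every linguist sleeps}]\!] = 1
  \quad\text{iff}\quad
  \forall x\,\bigl(\mathrm{linguist}(x) \rightarrow \mathrm{sleep}(x)\bigr)
\]
```

Whatever the sentence carries beyond those truth conditions (tone, use, felicity in a context) is, by tenet 4, set aside as outside the domain of semantics.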

The origins of these standard tenets of formal semantics, though, lie not in some empirical discovery, or the results of some reasoned debate, but rather in the declarations of a handful of influential logicians and philosophers. The ascendancy of formal semantics, then, is due not to a revolution, but a coup. Since linguistic theory doesn’t get the same amount of press as economics and physics, the historical contours of the shift to formal semantics are at best murky. As such, I’ll explain my coup diagnosis through a series of personal anecdotes—not the ideal method, but the best I can do right now.

I was first exposed to formal semantics in my graduate coursework. The four numbered statements above were what I took for granted for a while. I was aware that there were other ways of looking at meaning, and that formal semantics was a relatively recent addition to the generative grammar family of theories, and I guess I assumed that the advent of formal semantics was an intellectual revolution and there must’ve been a great debate between the formalists and the non-formalists and the formalists came out on top. Of course, no one ever talked about that debate—I knew about the ongoing debates between behaviourists and generativists, and the “wars” between Generative Semantics and interpretive semantics, but no one told the tales of the Great Formal Semantics Debates. This should have been my first red flag—academics aren’t shy about their revolutionary arguments.

I first began to have qualms about formal semantics when I heard Noam Chomsky’s lucid critiques of referentialism (tenet #1 above) in the Michel Gondry documentary Is the Man Who Is Tall Happy?. Here was the man who founded Generative Syntax, who’s often considered a genius, and whose publications are usually major events in the field, arguing that we’ve been doing semantics all wrong. As I better familiarized myself with his arguments, it became clear that he was holding a reasonable position. If I ever brought it up to a working semanticist, though, they would first brush it off, saying basically “Chomsky needs to stay in his lane,” but when I put the arguments to them, they would acknowledge that the arguments might be sound, but that formal semantics was the only game in town (i.e., there is no alternative). One even told me straight out that, sure, I could go against formal semantics, but if I did, I’d never get hired by any linguistics department (of course, given the prevailing political and economic environment surrounding academic institutions, the odds of me getting hired regardless of my stance on formal semantics are pretty long anyway). This was when I first started to suspect something was amiss—the only defense that could be mustered for formal semantics was that everyone else was doing it and we can’t imagine an alternative.

I had to admit, though, that, despite my misgivings, I had no alternative to formal semantics and, being a syntactician, I didn’t really have the inclination to spend a lot of time coming up with one. As luck would have it, though, I happened upon exactly the sort of alternative that wasn’t supposed to exist: Jerrold Katz’ Semantic Theory. Published in 1972, the theory Katz proposed was explicitly non-referentialist, formal (in the sense of having a formalism), and opposed to what we now call formal semantics. It was quite a surprise because I had heard of Katz—I read a paper he co-authored with Jerry Fodor for a syntax course—but strangely, he was always associated with the Generative Semantics crew—strangely, because he explicitly argues against them in his book. So, contrary to what I’d been told, there was an alternative, but why was I just finding out about it now? Unfortunately, Jerrold Katz died a few years before I ever picked up his book, as had his occasional co-author Jerry Fodor, so I couldn’t get their accounts of why his work had fallen out of favour. I asked the semanticists I knew about him and they recognized the name but had no idea about his work. The best explanation I got was from Chomsky, who said that Katz did good work, but semanticists were no longer interested in the questions he was asking. There were no stories of an LSA where Katz squared off against the new upstarts and was soundly beaten, no debates in the pages of Language or Linguistic Inquiry; Katz was just brushed aside and never spoken of again. Instead, the very fiats of philosophers and logicians (Carnap, Lewis, Quine, etc.) that Katz had argued against became the unexamined cornerstones of the field.

So, while the givenness of formal semantics was probably not the result of the schemes of a cabal of moneyed academics, like the Chicago School was, it doesn’t seem to have been the result of an open debate based on ideas and evidence, and it’s held in place, not by reason, but basically by sociopolitical forces. Thus I feel comfortable suggesting that it was the result of an intellectual coup.

Summing up: There’s always an alternative

I’ve offered a few potential features of an intellectual coup here, but nothing like an exhaustive diagnostic checklist. One important feature, though, is the “there is no alternative” attitude that they seem to foster. Any progress that we’ve made as a species, be it political, social, intellectual, or otherwise, stems from our ability to imagine a different way of doing things. So, for an intellectual community to be open to progress, it has to accept that there are other ways of thinking about the world. Some of those alternatives are worse, some are better, but the only sure-fire way not to make progress is to declare that there is no alternative.

Internal unity in science again

Or, how to criticize a scientific theory

Recently, I discovered a book called The Primacy of Grammar by philosopher Nirmalangshu Mukherji. The book is basically an extended, and in my opinion quite good, apologia for biolinguistics as a science. The book is very readable and covers a decent amount of ground, including an entire chapter discussing the viability of incorporating a faculty of music into biolinguistic theory. I highly recommend it.

At one point, while defending biolinguistics from the charge of incompleteness levied by semanticists and philosophers, Mukherji makes the following point.

[D]uring the development of a science, a point comes when our pretheoretical expectations that led to the science in the first place have changed enough, and have been accommodated enough in the science for the science to define its objects in a theory-internal fashion. At this point, the science—viewed as a body of doctrines—becomes complete in carving out some specific aspect of nature. From that point on, only radical changes in the body of theory itself—not pressures from common sense—force further shifting of domains (Mukherji 2001). In the case of grammatical theory, either that point has not been reached or … the point has been reached but not yet recognized.

Mukherji (2010, 122-3)

There are two interesting claims that Mukherji is making about linguistic theory and scientific theory in general. One is that theoretical objects are solely governed by theory-internal considerations. The other is that the theory itself determines what in the external world it applies to.

The first claim reminded me of a meeting I had with my doctoral supervisor while I was writing my thesis. My theoretical explanation rested on the hypothesis that even the simplest of non-function words, like coffee, were decomposable into root objects (√COFFEE) and categorizing heads (n0). I had a dilemma though. It was crucial to my argument that, while categorizing heads had discrete features, roots were treated as featureless blobs by the grammar, but I couldn’t figure out how to justify such a claim. When I expressed my concern to my supervisor, she immediately put my worries to rest. I didn’t need to justify that claim, she pointed out, because roots by their definition have no features.

I had fallen into a very common trap in syntax—I had treated a theory-internal object as an empirical object. Empirical objects can be observed and sensibly argued about. Take, for instance, English specificational clauses (e.g. The winner is Mary). Linguists can and do argue about the nature of these—i.e. whether or not they are truly the inverse of predicational clauses (e.g., Mary is the winner)—and cite facts when they do so. This is because empirical objects and phenomena are out there in the real world, regardless of whether we study them. Theory-internal objects, on the other hand, are not subject to fact-based argument, because, unless the Platonists are right, they have no objective reality. As long as my theory is internally consistent, I can define its objects however I damn please. The true test of any theory is how well it can be mapped onto some aspect of reality.

This brings me to Mukherji’s second assertion, that the empirical domain of a theory is determined by the theory itself. In the context of his book, this assertion is about linguistic meaning. The pretheoretic notion of meaning is what he calls a “thick” notion—a multifaceted concept that is very difficult to pin down. The development of a biolinguistic theory of grammar, though, has led to a thinner notion of meaning, namely, the LF of a given expression. Now obviously, this notion of meaning doesn’t include notions of reference, truth, or felicity, but why should we expect it to? Yes, those notions belong to our common-sense ideas of meaning, but surely at this stage of human history, we should expect that scientific inquiry will reveal our common-sense notions to be flawed.

As an analogy, Aristotle and his contemporaries didn’t distinguish between physics, biology, chemistry, geology, and so on—they were all part of physics. One of the innovations of the scientific revolutions, then, was to narrow the scope of investigation—to develop theories of a sliver of nature. If Aristotle saw our modern physics departments, he might look past all of their fantastic theoretical advances and wonder instead why no one in the department was studying plants and animals. Most critiques of internalist/biolinguistic notions of semantics by modern philosophers and formal semanticists echo this hypothetical time-travelling Aristotle—they brush off any advances and wonder where the theory of truth is.

Taken together, these assertions imply a general principle: Scientific theories should be assessed on their own terms. Criticizing grammatical theory for its lack of a theory of reference makes as much sense as criticizing Special Relativity for its lack of a theory of genetic inheritance. While this may seem to render any theory beyond criticism, the history of science demonstrates that this isn’t the case. Consider, for instance, quantum mechanics, which has been subject to a number of criticisms in its own terms—see: Einstein’s criticisms of QM, Schrödinger’s cat, and the measurement problem. In some cases these criticisms are insurmountable, but in others addressing them head-on and modifying or clarifying the theory is what leads to advances in the theory. Chomsky’s Label Theory, I think, is one of the latter sorts of cases—a theory-internal problem was identified and addressed and as a result two unexplained phenomena (the EPP and the ECP) were given a theoretical explanation. We can debate how well that explanation generalizes and whether it leans too heavily on some auxiliary hypotheses, but what’s important is that a theory-internal addressing of a theory-internal problem opened up the possibility of such an explanation. This may seem wildly counter-intuitive, but as I argued in a previous post, this is the only practical way to do science.

The principle that a theory should be criticized in its own terms is, I think, what irks the majority of linguists about biolinguistic grammatical theory the most. It bothers them because it means that very few of their objections to the theory ever really stick. Ergativity, for instance, is often touted as a serious problem for Abstract Case Theory, but since grammatical theory has nothing to say about particular case alignments, theorists can just say “Yeah, that’s interesting” and move on. Or to take a more extreme case, recent years have seen all-out assaults on grammatical theory from people who bizarrely call themselves “cognitive linguists”, people like Vyvyan Evans and Daniel Everett, who claim to have evidence that roundly refutes the very notion of a language faculty. The response of biolinguists to this assault: mostly a resounding shrug as we turn back to our work.

So, critics of biolinguistic grammatical theory dismiss it in a number of ways. They say it’s too vague or slippery to be any good as a theory, which usually means they refuse to seriously engage with it; they complain that the theory keeps changing, a peculiar complaint to lodge against a scientific theory; or they accuse theorists of arrogance, a charge that, despite being occasionally true, is not a criticism of the theory. This kind of hostility can be bewildering, especially because a corollary of the idea that a theory defines its own domain is that everything outside that domain is a free-for-all. It’s hard to imagine a geneticist being upset that their data is irrelevant to Special Relativity. I have some ideas about where the hostility comes from, but they’ll take me pretty far afield, so I’ll save them for a later post and leave it here.

Instrumentalism in Linguistics

(Note: Unlike my previous posts, this one is not aimed at a general audience. This one’s for linguists.)

As a generative linguist, I like to think of myself as a scientist. Certainly, my field is not as mature and developed as physics, chemistry, and biology, but my fellow linguists and I approach language and its relation to human psychology scientifically. This is crucial to our identity. Sure our universities consider linguistics a member of the humanities, and we often share departments with literary theorists, but we’re scientists!

Because it’s so central to our identity, we’re horribly insecure about our status as scientists. As a result of our desire to be seen as a scientific field, we’ve adopted a particular philosophy of science without even realizing it: Instrumentalism.

But what is instrumentalism? It’s the belief that the sole, or at least primary, purpose of a scientific theory is its ability to generate empirical tests and predict their outcomes. So, one theory is preferable to another if and only if the former better predicts the data than the latter. A theory’s simplicity, intelligibility, or consistency is at best a secondary consideration. Two theories that have the same empirical value can then be compared according to these standards. Generative linguistics seems to have adopted this philosophy, to its detriment.

What’s wrong with instrumentalism? Nothing per se. It definitely has its place in science. It’s perfectly reasonable for a chemist in a lab to view quantum mechanics as an experiment-generating machine. In fact, it might be an impediment to their work to worry about how intelligible QM is. They would be happy to leave that kind of thinking to the theorists and philosophers while they, the experimenters, use the sanitized mathematical expressions of QM to design and carry out their work.

“Linguistics is a science,” the linguist thinks to themself. “So, linguists ought to behave like scientists.” Then with a glance at the experimental chemist, the linguist adopts instrumentalism. But there’s a fallacy in that line of thinking: the fact that instrumentalism is an appropriate attitude for some people in a mature science, like chemistry, does not mean it should be the default attitude for people in a nascent science, like linguistics. In fact, there are good reasons for instrumentalism to be only a marginally acceptable attitude in linguistics. Rather, we should judge our theories on the more humanistic measures of intelligibility, simplicity, and self-consistency in addition to consistency with experience.

What’s wrong with instrumentalism in linguistics?

So why can’t linguists be like the chemist in the lab? Why can’t we read the theory, develop the tests of the theory, and run them? There are a number of reasons. First, as some philosophers of science have argued, it is never the case that a theoretical statement is put to the test by an empirical statement alone; rather, the former is tested by the latter in light of a suite of background assumptions. So, chemists can count the number of molecules in a sample of gas if they know its pressure, volume, and temperature. How do they know, say, the temperature of the gas sample? They use a thermometer, of course, an instrument they trust by virtue of their background assumptions regarding how matter, in general, and mercury, in particular, are affected by temperature changes. Lucky for chemists, those assumptions have centuries worth of testing and thinking behind them. No such luck for generative linguists: we’ve only got a few decades of testing and thinking behind our assumptions, which is reflected by how few empirical tools we have and how unreliable they are. Our tests for syntactic constituency are pretty good in a few cases — good enough to provide evidence that syntax traffics in constituency — but they give way too many false positives and negatives. Their unreliability means real syntactic work must develop diagnostics which are more intricate and which carry much more theoretical baggage. If a theory is merely a hypothesis-machine, and the tools for testing those hypotheses depend on the theory, how can we avoid rigging the game in our favour?
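As an aside, the molecule-counting example is itself a nice picture of how theory-laden a “measurement” can be: the count comes not from observation but from the ideal gas law plus trusted constants. A minimal sketch, assuming ideal-gas behaviour:

```python
# Counting molecules in a gas sample from pressure, volume, and temperature
# via the ideal gas law PV = nRT; the "count" is only as trustworthy as the
# background theory (and the thermometer) it leans on.

R = 8.314            # gas constant, J / (mol K)
AVOGADRO = 6.022e23  # molecules per mole

def molecule_count(pressure_pa: float, volume_m3: float, temp_k: float) -> float:
    """Return the approximate number of molecules in the sample."""
    moles = (pressure_pa * volume_m3) / (R * temp_k)
    return moles * AVOGADRO

if __name__ == "__main__":
    # One litre of gas at atmospheric pressure and room temperature:
    print(f"{molecule_count(101_325, 0.001, 293.15):.2e}")  # about 2.50e+22
```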

Suppose we have two theories, T1 and T2, which are sets of statements regarding an empirical domain D. T1 has been rigorously vetted and found to be internally consistent, simple, and intelligible, and predicts 80% of the facts in D. T2 is rife with inconsistencies, hidden complexities, and opaque concepts, but covers 90% of the facts in D. Which is the better theory? Instrumentalism would suggest T2 is the superior theory due to its empirical coverage. Non-dogmatic people might disagree, but I suspect they would all be uncomfortable with instrumentalism as the sole arbiter in this case.

The second problem, which exacerbates the first, is that there’s too much data, and it’s too easy to get even more. This has resulted in subdisciplines being further divided into several niches, each devoted to a particular phenomenon or group of languages. Such a narrowing of the empirical domain, coupled with an instrumentalist view of theorizing, has frequently led to the development of competing theories of that domain, theories which are largely impenetrable to those conversant with the general theory but uninitiated in the niche in question. This is a different situation from the one described above. In this situation T1 and T2 might each cover 60% of a subdomain D’, but those two 60% portions overlap. Each has a core set of facts that the other cannot, as yet, touch, so the two sides take turns claiming parts of the overlap as their sole territory, and no progress is made.

Often it’s the case that one of the competing specific theories is inconsistent with the general theory, but proponents of the other theory don’t use that fact in their arguments. In their estimation the data always trumps theory, regardless of how inherently theory-laden the description of the data is. It’s as if two factions were fighting each other with swords despite the fact that one side had a cache of rifles and ammunition that they decided not to use.

The third problem, one that has been noted by other theory-minded linguists here and here, is that the line between theoretical and empirical linguistics is blurry. To put it a bit more strongly, what is called “theoretical linguistics” is often empirical linguistics masquerading as theoretical. This becomes clear when we look at the usual structure of a “theoretical syntax” paper in the abstract. First, a grammatical phenomenon is identified and demonstrated. After some discussion of previous work, the author presents the results of some diagnostics and, from those results, gives a formal analysis of the phenomenon. If we translated this into the language of a mature science, it would be indistinguishable from an experimental report: a phenomenon is identified and discussed, the results of some empirical techniques are reported, and an analysis is given.

You might ask “So what? Who cares what empirical syntacticians call themselves?” Well, if you’re a “theoretical syntactician,” then you might propose a modification of syntactic theory to make your empirical analysis work, and other “theoretical syntacticians” will accept those modifications and propose some modifications of their own. It doesn’t take too long in this cycle before the standard theory is rife with inconsistencies, hidden complexities, and opaque concepts. None of that matters, however, if your goal is just to cover the data.

Or, to take another common “theoretical” move, suppose we find an empirical generalization G (e.g., all languages that allow X also allow Y). The difficult task of the theoretician is to show that G follows from independently motivated theoretical principles. The “theoretician,” on the other hand, has another path available: restate G in “theoretical” terms (e.g., a functional head H is responsible for both X and Y), and then (maybe) go looking for some corroboration. Never mind that restating G in different terms does nothing to expand our understanding of why G holds; understanding is always secondary for instrumentalism.

So, what’s to be done?

Reading this, you might think I don’t value empirical work in linguistics, which is simply not the case. Quite frankly, I am constantly in awe of linguists who can take a horrible mess of data and make even a modicum of sense out of it. Empirical work has value, but linguistics has somehow managed to both over- and under-value it. We over-value it by tacitly embracing instrumentalism as our guiding philosophy. We under-value it by giving the title “theoretical linguist” a certain level of prestige, as if empirical work were easier and less-than. This, in turn, has led us to under-value genuinely theoretical work, and to view theoretical arguments as just gravy when they’re in our favour and as irrelevancies when they’re against us.

What we should strive for is an appropriate balance between empirical and theoretical work. To get to that balance, we must do the unthinkable and look to the humanities. To develop as a science, we ought to look at mature sciences, not as they are now, but as they developed. Put another way, we need to think historically. If we truly want our theory to explain the human language faculty, we need to accept that we will be explaining it to humans, and designing a theory that another human can understand requires us to embrace our non-rational qualities, like intuition and imagination.

In sum, we could all use a little humility. Maybe we’ll reach a point when instrumentalism will work for empirical linguistics, but we’re not there yet, and pretending we are won’t make it so.

Tarring Universal Grammar with the Brexit brush

Over at Psychology Today, Vyv Evans, cognitive linguist and UG critic, has written a piece criticizing generative linguistics and those who defend its practice. In particular, he criticizes what he sees as the shape-shifting nature of UG.

I don’t want to address the substance of Evans’ piece, but rather a rhetorical choice he makes: specifically, his choice to compare UG to Brexit. (There was at least one other bit of rhetoric that bothered me, but I’ll save that for a later post.) It’s a choice that serves no purpose other than to give the reader a negative emotional impression of UG: “Brexit is bad. UG is like Brexit in some way. Therefore UG is bad.”

So, what do UG and Brexit have in common? Well, Evans begins by discussing the fact that members of the UK government can’t seem to reach a consensus as to what Brexit means. In Evans’ words:

…[T]here are now, perhaps, as many versions of Brexit as there are governmental ministers; moreover, each minister’s version of Brexit seems to change each time they are interviewed on the subject. Brexit is a shape-shifting chimera, as the UK government attempts to square the impossible circle of respecting the referendum result, and democracy, while attempting to avoid destroying the UK’s economy.

UG, in Evans’ estimation, is the same. Ask two generative grammarians what Universal Grammar means and you’ll get two distinct answers. Ask them again in a year, and you’ll get two more answers. This leads Evans to the question of why generative grammarians can’t just settle on a definition of UG, to which he offers the answer: because it isn’t real.

Seems pretty damning, but let’s push on it just a bit. If the “shape-shifting” nature of generative linguistics follows from the fact that UG isn’t real, does that mean that Brexit also isn’t real? Surely, Evans doesn’t believe that Brexit is merely a figment of the UK government’s collective imagination. I don’t think he does, but I do think he knows that most of his readers wish Brexit were just a dream they could wake from, and that, with the comparison drawn, they may now want UG theory to be false too.

And if “shape-shifting” is a sign that UG is false, why bother with the talk of Brexit? Why not write an article surveying the many ways UG theory has changed and been reinterpreted since the 50’s, and how current generative grammarians still debate each other on what UG is and isn’t? Perhaps because that wouldn’t make it as easy for the reader to conclude that UG theory is patently false.

Just to drive my point home, let’s consider an article Evans could have written. One with the same logical structure, but vastly different emotional structure.

What do Quantum Mechanics and Universal Grammar have in common?
Ask a layperson about quantum mechanics and they may tell you something about Schrödinger’s cat being both alive and dead until we observe it, but ask a physicist or a philosopher of physics, and who knows what they’ll say. They may talk about superposition or the randomness of the universe. They might talk about guiding waves. They might even talk about multiple realities. Or maybe they’ll just show you some equations and go back to their experiments. In fact, if you ask any number of physicists what quantum mechanics is, you’ll get the same number of responses, each differing from the others to varying degrees. And if you look at Universal Grammar theory, you’ll find a similar situation. Probably because UG simply isn’t real.

Such an article would never be published by Psychology Today, and any academic who wrote it would be laughed out of academia, and scolded for suggesting that quantum mechanics might be a lie.

And to be perfectly clear, anyone who used such a comparison to quantum mechanics to assert Universal Grammar’s validity would be equally wrong, because (and I assume Evans knows this, since I take him to be neither stupid nor irrational) the fact that two things share one property doesn’t mean they share any other properties.

Quite frankly, I’m disappointed by Evans’ article. I’m disappointed that he’s resorted to this kind of emotional appeal rather than well-reasoned criticism of UG. Academics, like artists and politicians, need good criticism to keep them honest, because theories are made stronger not only by adding good arguments in their favour, but also by discarding bad arguments in their favour.

Don’t believe the rumours. Universal Grammar is alive and well.

As I write this I am sitting in the Linguistics Department lounge at the University of Toronto. Grad students and Post-doctoral researchers are working, chatting, making coffee. Faculty members pop in every now and then, taking breaks from their work.

It’s a vibrant department, full of researchers with varied skills and interests. There are those who just got back into town from their summer fieldwork, excited to dig into the new language data from indigenous Canadian, Amazonian, or Australian languages. There are those struggling to find a way to explain the behaviour of some set of prefixes or verbs in Turkish, or Spanish, or Niuean. There are those designing and running experiments to test what young children know about the languages they are acquiring. There are those sifting through massive databases of speech from rural farmers, or lyrics of local hip-hop artists, or the emails of Enron employees, to hopefully gain some understanding of how English varies and changes. And there are those who spend their time thinking about our theories of language and how they might be integrated with theories of Psychology, Neurology, and Biology. What unites these disparate research agendas is that they are all grounded in the hypothesis, generally attributed to Noam Chomsky, that the human mind contains innate structures, a Universal Grammar, that allows us to acquire, comprehend, and use language.

According to a recent article in Scientific American, however, the community I just described doesn’t exist, and maybe couldn’t possibly exist in linguistics today, because the kind of work I just described has long since shown the Universal Grammar hypothesis (UG) to be flat-out wrong. But such a community does exist, and not just here at UofT, or in Chomsky’s own department at MIT, but in Berkeley and Manhattan, in Newfoundland and Vancouver, in Norway and Tokyo: communities that collectively groan whenever someone sounds the death knell of the UG hypothesis or the enterprise of Generative Linguistics it spawned. We groan not because we’ve been exposed as the frauds or fools that these pieces say we are, but because we are always misrepresented in them. Sometimes the misrepresentation is laughable, but more often it’s damn frustrating.

Articles like the one in SA are frustrating because they are usually wrong, both about the history and culture of UG and about the idea itself.

The picture of UG’s history, as painted by the SA article, is one of Chomsky coming up with a nice theory of language based on a handful of English sentences and passing it down to academics who accept it without question or debate. Those academics then play around with his theory and make grander and more absurd claims, all without leaving the safety of their ivory towers. Meanwhile, the real linguists are out in the world collecting data from languages all over the globe, data which is devastating to UG and is therefore stifled by Chomsky and his acolytes, the generative linguists.

This picture is false on two counts. First, Chomsky and UG have always faced strong and vocal opposition. The philosophers and linguists who were Chomsky’s contemporaries in the 50’s and 60’s objected to the very notion that language could be used to scientifically investigate the mind. Many of his earliest students developed competing theories of grammar which are still worked on today. And that’s not even counting the linguists who largely agree with Chomsky’s picture of language but disagree with some of the technical details he proposes.

Second, far from being hostile to language data gathered through fieldwork (or experimental work, or corpus work), generative linguists often seek out new language data to test predictions made by their theories. Some generative linguists, such as Keren Rice and the late Ken Hale, are known not only for their contributions to generative linguistics but also for their commitment to fieldwork and their advocacy on behalf of the marginalized communities in which they do/did that fieldwork. And the interest in a wide variety of languages extends beyond fieldworkers; a glance at the program of virtually any generative linguistics conference will demonstrate that. Because UG is such a solid theory, throwing language data at it doesn’t kill it, but makes it better.

Which brings us to the mistaken view of the actual theory of UG that articles such as the SA piece present. Daniel Everett, a field linguist and former evangelical missionary, gained prominence in 2005 when he claimed that Pirahã, a language spoken by a remote Amazonian tribe, showed properties that categorically refuted the UG hypothesis. Central to UG theory, in Everett’s estimation, is embedding, the ability of a language to place, for example, a clause inside another clause (as in “I heard that Maura laughed”). Pirahã, it seemed, was unable to embed.

On the surface, this does seem like a knockout punch to UG, but there’s just one problem with it: Everett is mistaken about what UG is.

The source of his confusion seems to be the term recursion: it is recursion, not embedding, that is the central concept of modern Generative linguistics. This might be confusing; I confess I don’t know that I understood the distinction until I was well into grad school. When we think of recursion, what comes to most people’s minds is the Droste effect, the effect of a picture appearing within itself.

The Droste effect is an example of embedding, and it is a demonstration of a possible effect of recursion, but it is not the definition of recursion. The term recursion, when used by Generative linguists, refers to a property of functions. Functions are relations between a set of inputs and a set of outputs. A function is recursive if its output can also serve as its input. The +1 function is recursive when applied to integers (1+1=2, 2+1=3, etc.). A function that converts numbers into their Roman numeral representation is not recursive (7 → VII, VII → ??). For generative linguistics the recursive function is Merge, which combines two words or phrases to form a larger structure, which can then be the input for further iterations of Merge. Any expression larger than two words, then, requires recursion, regardless of whether there is embedding in that expression. For instance, the noun phrase “my favourite book” requires two iterations of Merge (Merge(favourite, book) = [favourite book]; Merge(my, [favourite book]) = [my [favourite book]]) and is therefore an instance of recursion without embedding.
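For readers who find code clearer than bracket notation, here is a toy sketch in Python. It is not anyone’s actual formalism, just an illustration of a function whose output can be fed back in as its input; the example words and print-outs are my own.

```python
# Toy illustration of Merge as a recursive function: it combines two
# words or phrases into a phrase, and that phrase can itself be handed
# back to Merge as an input. (A sketch, not an actual linguistic model.)

def merge(a, b):
    """Combine two words/phrases into a larger bracketed phrase."""
    return [a, b]

# Recursion without embedding: "my favourite book".
# Merge applies twice, and its second application takes its own earlier
# output as an input, but no clause is contained inside another clause.
noun_phrase = merge("my", merge("favourite", "book"))
print(noun_phrase)  # ['my', ['favourite', 'book']]

# Recursion with embedding: "I heard that Maura laughed".
# Here a clause ("Maura laughed") ends up inside a larger clause.
embedded_clause = merge("that", merge("Maura", "laughed"))
sentence = merge("I", merge("heard", embedded_clause))
print(sentence)  # ['I', ['heard', ['that', ['Maura', 'laughed']]]]
```

In both cases the output of merge is recycled as an input, which is all that recursion, in the sense used here, requires; only the second case involves the clause-inside-a-clause embedding that Everett claims Pirahã lacks.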

The confusion between recursion and embedding is due to the fact that, while only recursive functions are capable of generating embedded structures, not all recursive functions generate embedded structures. The relationship between recursion and embedding is like the relationship between precipitation and snow. If we see snow on the ground, we know there’s been precipitation, but if we don’t see snow, that doesn’t mean there has been no precipitation. So, for Everett to say that Pirahã’s alleged lack of embedding means it lacks recursion is equivalent to saying the Amazonian climate lacks precipitation because you never find snow on the ground there.

I’ll admit this is a subtle distinction. The average person, even the average linguist, doesn’t need to worry about the distinction. But here’s what makes Everett’s claims and that SA article (written by two academics, Paul Ibbotson and Michael Tomasello) so frustrating: They should know better. Not because their day-to-day research projects require them to know better, but because when you claim that someone said something incorrect, you had better know what it was they said. What’s more, when the recursion/embedding distinction is mistakenly blurred by those who reject the UG hypothesis, Generative linguists are quick to correct the mistake.

So, being trained academic researchers, Everett, Ibbotson and Tomasello should have researched the claims they intended to rebut. But even if they missed or misinterpreted something in Chomsky’s writing, a number of Generative linguists have already clarified their mistake. Why, then, do they persist in misrepresenting the claims they are trying to rebut?

I’m certain there are more irritating articles and books proclaiming the death of UG coming, but in the meantime, in vibrant communities like the UofT Linguistics Department, Professors, Post-docs, and Grad students will continue to investigate UG with every tool at our disposal. We’ll gather data from remote villages and from diaspora communities in our cities. We’ll run statistical analyses on corpora and develop formal models. We’ll present our findings and debate proposals. And in doing so, we hope to continually better our understanding of the deep properties of our species’ unique ability to acquire, comprehend, and use language.

Edit: I’ve removed the assertion that mathematicians and computer scientists share the definition of recursion given here.