Instrumentalism in Linguistics

(Note: Unlike my previous posts, this one is not aimed at a general audience. This one’s for linguists.)

As a generative linguist, I like to think of myself as a scientist. Certainly, my field is not as mature and developed as physics, chemistry, and biology, but my fellow linguists and I approach language and its relation to human psychology scientifically. This is crucial to our identity. Sure, our universities consider linguistics a member of the humanities, and we often share departments with literary theorists, but we’re scientists!

Because it’s so central to our identity, we’re horribly insecure about our status as scientists. As a result of our desire to be seen as a scientific field, we’ve adopted a particular philosophy of science without even realizing it: Instrumentalism.

But what is instrumentalism? It’s the belief that the sole, or at least primary, purpose of a scientific theory is its ability to generate and predict the outcome of empirical tests. So, one theory is preferable to another if and only if the former predicts the data better than the latter. A theory’s simplicity, intelligibility, or consistency is at best a secondary consideration; only when two theories have the same empirical value can they be compared according to those standards. Generative linguistics seems to have adopted this philosophy, to its detriment.
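To state the criterion baldly (the notation here is just shorthand for the purposes of this post, not anything standard in the philosophy-of-science literature): T1 is preferable to T2 if and only if Cov(T1, D) > Cov(T2, D), where D is the empirical domain and Cov(T, D) is the share of D that T correctly predicts. Simplicity, intelligibility, and consistency enter only to break ties when coverage is equal.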

What’s wrong with instrumentalism? Nothing per se. It definitely has its place in science. It’s perfectly reasonable for a chemist in a lab to view quantum mechanics as an experiment-generating machine. In fact, it might be an impediment to their work to worry about how intelligible QM is. They would be happy to leave that kind of thinking to the theorists and philosophers while they, the experimenter, used the sanitized mathematical expressions of QM to design and carry out their work.

“Linguistics is a science,” the linguist thinks to themself. “So, linguists ought to behave like scientists.” Then, with a glance at the experimental chemist, the linguist adopts instrumentalism. But there’s a fallacy in that line of thinking: instrumentalism being an appropriate attitude for some people in a mature science, like chemistry, does not mean it should be the default attitude for people in a nascent science, like linguistics. In fact, there are good reasons for instrumentalism to be only a marginally acceptable attitude in linguistics. Rather, we should judge our theories on the more humanistic measures of intelligibility, simplicity, and self-consistency in addition to consistency with experience.

What’s wrong with instrumentalism in linguistics?

So why can’t linguists be like the chemist in the lab? Why can’t we read the theory, develop the tests of the theory, and run them? There are a number of reasons. First, as some philosophers of science have argued, a theoretical statement is never put to the test by an empirical statement alone; rather, the former is tested by the latter in light of a suite of background assumptions. So, chemists can count the number of molecules in a sample of gas if they know its pressure, volume, and temperature. How do they know, say, the temperature of the gas sample? They use a thermometer, of course, an instrument they trust by virtue of their background assumptions regarding how matter in general, and mercury in particular, is affected by temperature changes. Lucky for chemists, those assumptions have centuries’ worth of testing and thinking behind them. No such luck for generative linguists: we’ve only got a few decades of testing and thinking behind our assumptions, which is reflected in how few empirical tools we have and how unreliable they are. Our tests for syntactic constituency are pretty good in a few cases — good enough to provide evidence that syntax traffics in constituency — but they give far too many false positives and negatives. Their unreliability means real syntactic work must develop diagnostics which are more intricate and which carry much more theoretical baggage. If a theory is merely a hypothesis-machine, and the tools for testing those hypotheses depend on the theory, how can we avoid rigging the game in our favour?
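(As a side note, the molecule-counting I just attributed to the chemist is nothing more exotic than the ideal gas law rearranged, n = PV/RT, with the number of molecules then given by n × N_A, where R is the gas constant and N_A is Avogadro’s number. The T in that formula is exactly the reading the thermometer is trusted to deliver, which is what makes the background assumptions load-bearing rather than decorative.)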

Suppose we have two theories, T1 and T2, which are sets of statements regarding an empirical domain D. T1 has been rigorously vetted and found to be internally consistent, simple, and intelligible, and it predicts 80% of the facts in D. T2 is rife with inconsistencies, hidden complexities, and opaque concepts, but covers 90% of the facts in D. Which is the better theory? Instrumentalism would suggest T2 is the superior theory due to its empirical coverage. Non-dogmatic people might disagree, but I suspect they would all be uncomfortable with instrumentalism as the sole arbiter in this case.

The second problem, which exacerbates the first, is that there’s too much data, and it’s too easy to get even more. This has resulted in subdisciplines being further divided into niches, each devoted to a particular phenomenon or group of languages. Such a narrowing of the empirical domain, coupled with an instrumentalist view of theorizing, has frequently led to the development of competing theories of that domain, theories which are largely impenetrable to those conversant with the general theory but uninitiated in the niche in question. This is a different situation from the one described above. In this situation, T1 and T2 might each cover 60% of a subdomain D’, but those two portions overlap. Each theory has a core set of facts that the other cannot, as yet, touch, so the two sides take turns claiming parts of the overlap as their sole territory, and no progress is made.

Often it’s the case that one of the competing specific theories is inconsistent with the general theory, but proponents of the other theory don’t use that fact in their arguments. In their estimation, data always trumps theory, regardless of how inherently theory-laden the description of that data is. It’s as if two factions were fighting each other with swords even though one side had a cache of rifles and ammunition it had simply decided not to use.

The third problem, one that has been noted by other theory-minded linguists here and here, is that the line between theoretical and empirical linguistics is blurry. To put it a bit more strongly, what is called “theoretical linguistics” is often empirical linguistics masquerading as theoretical. This assertion becomes clear when we look at the usual structure of a “theoretical syntax” paper in the abstract. First, a grammatical phenomenon is identified and demonstrated. After some discussion of previous work, the author presents the results of some diagnostics and from those results gives a formal analysis of the phenomenon. If we translated this into the language of a mature science, it would be indistinguishable from an experimental report: a phenomenon is identified and discussed, the results of some empirical techniques are reported, and an analysis is given.

You might ask “So what? Who cares what empirical syntacticians call themselves?” Well, if you’re a “theoretical syntactician,” then you might propose a modification of syntactic theory to make your empirical analysis work, and other “theoretical syntacticians” will accept those modifications and propose some modifications of their own. It doesn’t take too long in this cycle before the standard theory is rife with inconsistencies, hidden complexities, and opaque concepts. None of that matters, however, if your goal is just to cover the data.

Or, to take another common “theoretical” move, suppose we find an empirical generalization, G (e.g., all languages that allow X also allow Y). The difficult task of the theoretician is to show that G follows from independently motivated theoretical principles. The “theoretician,” on the other hand, has another path available: restate G in “theoretical” terms (e.g., functional head H is responsible for both X and Y), and then (maybe) go looking for some corroboration. Never mind that restating G in different terms does nothing to expand our understanding of why G holds; understanding is always secondary for instrumentalism.

So, what’s to be done?

Reading this, you might think I don’t value empirical work in linguistics, which is simply not the case. Quite frankly, I am constantly in awe of linguists who can take a horrible mess of data and make even a modicum of sense out of it. Empirical work has value, but linguistics has somehow managed to both over- and under-value it. We over-value it by tacitly embracing instrumentalism as our guiding philosophy. We under-value it by attaching prestige to the title “theoretical linguist,” as if empirical work were easier and less-than. And since much of what carries that title is really empirical work in disguise, we end up under-valuing genuinely theoretical work too, treating theoretical arguments as just gravy when they’re in our favour and as irrelevancies when they’re against us.

What we should strive for is an appropriate balance between empirical and theoretical work. To get to that balance, we must do the unthinkable and look to the humanities. To develop as a science, we ought to look at mature sciences, not as they are now, but as they developed. Put another way, we need to think historically. If we truly want our theory to explain the human language faculty, we need to accept that we will be explaining it to humans, and designing a theory that another human can understand requires us to embrace our non-rational qualities, like intuition and imagination.

In sum, we could all use a little humility. Maybe we’ll reach a point when instrumentalism will work for empirical linguistics, but we’re not there yet, and pretending we are won’t make it so.

Tarring Universal Grammar with the Brexit brush

Over at Psychology Today, Vyv Evans, cognitive linguist and UG critic, has written a piece criticizing generative linguistics, and those who defend its practice. In particular he criticizes what he sees as the shape-shifting nature of UG.

I don’t want to address the substance of Evans’ piece but rather a rhetorical choice he makes: specifically, his decision to compare UG to Brexit. (There was at least one other bit of rhetoric that bothered me, but I’ll save that for a later post.) It’s a choice that serves no purpose other than to give the reader a negative emotional impression of UG: “Brexit is bad. UG is like Brexit in some way. Therefore UG is bad.”

So, what do UG and Brexit have in common? Well, Evans begins by discussing the fact that members of the UK government can’t seem to reach a consensus as to what Brexit means. In Evans’ words:

…[T]here are now, perhaps, as many versions of Brexit as there are governmental ministers; moreover, each minister’s version of Brexit seems to change each time they are interviewed on the subject. Brexit is a shape-shifting chimera, as the UK government attempts to square the impossible circle of respecting the referendum result, and democracy, while attempting to avoid destroying the UK’s economy.

UG, in Evans’ estimation, is the same. Ask two generative grammarians what Universal Grammar means and you’ll get two distinct answers. Ask them again in a year, and you’ll get two more answers. This leads Evans to the question of why generative grammarians can’t just settle on a definition of UG, to which he offers the answer: because it isn’t real.

Seems pretty damning, but let’s push on it just a bit. If the “shape-shifting” nature of generative linguistics follows from the fact that UG isn’t real, does that mean that Brexit also isn’t real? Surely, Evans doesn’t believe that Brexit is merely a figment of the UK government’s collective imagination. I don’t think he does, but I do think he knows that most of his readers wish Brexit were just a dream they could awake from. And maybe now they want UG theory to be false.

And if “shape-shifting” is a sign that UG is false, why bother with the talk of Brexit? Why not write an article surveying the many ways UG theory has changed and been reinterpreted since the 50’s, and how current generative grammarians still debate each other on what UG is and isn’t? Perhaps because that wouldn’t make it as easy for the reader to conclude that UG theory is patently false.

Just to drive my point home, let’s consider an article Evans could have written: one with the same logical structure but a vastly different emotional structure.

What do Quantum Mechanics and Universal Grammar have in common?
Ask a layperson about quantum mechanics and they may tell you something about Schrödinger’s cat being both alive and dead until we observe it, but ask a physicist or a philosopher of physics, and who knows what they’ll say. They may talk about superposition or the randomness of the universe. They might talk about guiding waves. They might even talk about multiple realities. Or maybe they’ll just show you some equations and go back to their experiments. In fact, if you ask any number of physicists what quantum mechanics is, you’ll get the same number of responses, each differing from the others to varying degrees. And if you look at Universal Grammar theory, you’ll find a similar situation. Probably because UG simply isn’t real.

Such an article would never be published by Psychology Today, and any academic who wrote it would be laughed out of academia, and scolded for suggesting that quantum mechanics might be a lie.

And to be perfectly clear, anyone who used such a comparison to quantum mechanics to assert Universal Grammar’s validity would be equally wrong, because (and I assume Evans knows this, since I take him to be neither stupid nor irrational) just because two things share one property doesn’t mean they share any other properties.

Quite frankly, I’m disappointed by Evans’ article. I’m disappointed that he’s resorted to this kind of emotional appeal rather than well-reasoned criticism of UG. Academics, like artists and politicians, need good criticism to keep them honest, because theories are made stronger not only by adding good arguments in their favour but also by discarding bad arguments in their favour.

Don’t believe the rumours. Universal Grammar is alive and well.

As I write this I am sitting in the Linguistics Department lounge at the University of Toronto. Grad students and Post-doctoral researchers are working, chatting, making coffee. Faculty members pop in every now and then, taking breaks from their work.

It’s a vibrant department, full of researchers with varied skills and interests. There are those who just got back into town from their summer fieldwork, excited to dig into the new language data from indigenous Canadian, Amazonian, or Australian languages. There are those struggling to find a way to explain the behaviour of some set of prefixes or verbs in Turkish, or Spanish, or Niuean. There are those designing and running experiments to test what young children know about the languages they are acquiring. There are those sifting through massive databases of speech from rural farmers, or lyrics of local hip-hop artists, or the emails of Enron employees, to hopefully gain some understanding of how English varies and changes. And there are those who spend their time thinking about our theories of language and how they might be integrated with theories of Psychology, Neurology, and Biology. What unites these disparate research agendas is that they are all grounded in the hypothesis, generally attributed to Noam Chomsky, that the human mind contains innate structures, a Universal Grammar, that allows us to acquire, comprehend, and use language.

According to a recent article in Scientific American, however, the community I just described doesn’t exist, and maybe couldn’t possibly exist in linguistics today, because the kind of work that I just described has long since shown the Universal Grammar hypothesis (UG) to be flat-out wrong. But such a community does exist, and not just here at UofT, or in Chomsky’s own department at MIT, but in Berkeley and Manhattan, in Newfoundland and Vancouver, in Norway and Tokyo. Communities that collectively groan whenever someone sounds the death knell of the UG hypothesis or the enterprise of Generative Linguistics it spawned. We groan, not because we’ve been exposed for the frauds or fools that these pieces say we are, but because we are always misrepresented in them. Sometimes the misrepresentation is laughable, but more often it’s damn frustrating.

Articles like the one in SA are frustrating because they are usually wrong, both about the history and culture of UG and about the idea itself.

The picture of UG’s history, as painted by the SA article, is one of Chomsky coming up with a nice theory of language based on a handful of English sentences and passing that theory down to the academics, who accept it without question or debate. The academics, then, play around with his theory and make grander and more absurd claims, all without leaving the safety of their ivory towers. Meanwhile, the real linguists are out in the world collecting data from languages all over the globe, data which is devastating to UG and is therefore stifled by Chomsky and his acolytes, the generative linguists.

This picture is false on two counts. First, Chomsky and UG have always faced strong and vocal opposition. The philosophers and linguists who were Chomsky’s contemporaries in the 50’s and 60’s objected to the very notion that language could be used to scientifically investigate the mind. Many of his earliest students developed competing theories of grammar which are still worked on today. And that’s not even counting the linguists who largely agree with Chomsky’s picture of language but disagree with some of the technical details he proposes.

Second, far from being hostile to language data gathered by fieldwork (or experimental work, or corpus work), generative linguists often seek out new language data to test predictions made by their theories. Some generative linguists, such as Keren Rice and the late Ken Hale, are known not only for their contributions to generative linguistics but also for their commitment to fieldwork and advocacy on behalf of the marginalized communities in which they do or did their fieldwork. And the interest in a wide variety of languages extends beyond fieldworkers; a glance at the program for virtually any generative linguistics conference will demonstrate that. Because UG is such a solid theory, throwing language data at it doesn’t kill it but makes it better.

Which brings us to the mistaken view of the actual theory of UG that articles such as the SA piece present. Daniel Everett, a field linguist and former evangelical missionary, gained prominence in 2005 when he claimed that Pirahã, a language spoken by a remote Amazonian tribe, showed properties that categorically refuted the UG hypothesis. Central to UG theory, in Everett’s estimation, is embedding, the ability of a language to place, for example, a clause inside another clause (as in “I heard that Maura laughed”). Pirahã, it seemed, was unable to embed.

On the surface, this does seem like a knockout punch to UG, but there’s just one problem with it: Everett is mistaken about what UG is.

The source of his confusion seems to be the term recursion: it is recursion, not embedding, that is the central concept of modern Generative linguistics. This might be confusing; I confess I don’t know that I understood the distinction until I was well into grad school. When most people think of recursion, what comes to mind is the Droste effect, the effect of a picture appearing within itself.

The Droste effect is an example of embedding, and it is a demonstration of a possible effect of recursion, but it is not the definition of recursion. The term recursion, when used by Generative linguists, refers to a property of functions. Functions are relations between a set of inputs and a set of outputs. A function is recursive if its output can also serve as its input. The +1 function is recursive when applied to integers (1+1=2, 2+1=3, etc.). A function that converts numbers into their Roman numeral representation is not recursive (7 → VII, VII → ??). For generative linguistics, the recursive function is Merge, which combines two words or phrases to form a larger structure that can then be the input for further iterations of Merge. Any expression larger than two words therefore requires recursion, regardless of whether there is embedding in that expression. For instance, the noun phrase “my favourite book” requires two iterations of Merge (Merge(favourite, book) = [favourite book]; Merge(my, [favourite book]) = [my [favourite book]]) and is therefore an instance of recursion without embedding.
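To make the recursion-without-embedding point concrete, here is a minimal sketch in Python (my own illustration, not a formalism from the generative literature; the function name and the bracketed-list representation are just conveniences):

```python
# A toy Merge: combine two words or phrases into a larger structure.
# It is "recursive" in the sense described above: its output can serve
# as its input on a later application.
def merge(a, b):
    return [a, b]

# "my favourite book": two applications of merge, no clause inside a clause.
np = merge("my", merge("favourite", "book"))
print(np)  # ['my', ['favourite', 'book']] -- recursion without embedding

# "I heard that Maura laughed": repeated applications of merge can also
# end up putting one clause inside another. Embedding is one possible
# outcome of recursion, not its definition.
cp = merge("I", merge("heard", merge("that", merge("Maura", "laughed"))))
print(cp)  # ['I', ['heard', ['that', ['Maura', 'laughed']]]]
```

Both objects are built by the same output-feeds-input loop; only the second happens to contain a clause inside a clause.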

The confusion between recursion and embedding is due to the fact that, while only recursive functions are capable of generating embedded structures, not all recursive functions generate embedded structures. The relationship between recursion and embedding is like the relationship between precipitation and snow. If we see snow on the ground, we know there’s been precipitation, but if we don’t see snow, that doesn’t mean there has been no precipitation. So, for Everett to say that Pirahã’s alleged lack of embedding means it lacks recursion is equivalent to saying that the Amazonian climate lacks precipitation because you never find snow on the ground there.

I’ll admit this is a subtle distinction. The average person, even the average linguist, doesn’t need to worry about the distinction. But here’s what makes Everett’s claims and that SA article (written by two academics, Paul Ibbotson and Michael Tomasello) so frustrating: They should know better. Not because their day-to-day research projects require them to know better, but because when you claim that someone said something incorrect, you had better know what it was they said. What’s more, when the recursion/embedding distinction is mistakenly blurred by those who reject the UG hypothesis, Generative linguists are quick to correct the mistake.

So, being trained academic researchers, Everett, Ibbotson and Tomasello should have researched the claims they intended to rebut. But even if they missed or misinterpreted something in Chomsky’s writing, a number of Generative linguists have already clarified their mistake. Why, then, do they persist in misrepresenting the claims they are trying to rebut?

I’m certain there are more irritating articles and books proclaiming the death of UG to come, but in the meantime, in vibrant communities like the UofT Linguistics Department, Professors, Post-docs, and Grad students will continue to investigate UG with every tool at our disposal. We’ll gather data from remote villages and diaspora communities in our cities. We’ll run statistical analyses on corpora and develop formal models. We’ll present our findings and debate proposals. And in doing so, we hope to continually better our understanding of the deep properties of our species’ unique ability to acquire, comprehend, and use language.

Edit: I’ve removed the assertion that mathematicians and computer scientists share the definition of recursion given here.