Bad omens for generative syntax

In the last few weeks there have been a couple of posts in the generative linguistics blogosphere that don’t bode well for the field.

The first is the sad announcement from Omer Preminger that he is leaving academia in order to live in the same town as his wife. This news is also rather shocking, since Preminger is a fairly prominent syntactician—someone whose work, though I didn’t always agree with it, had to be addressed seriously—and if a scholar of his prominence and ability can’t negotiate something as reasonable as a spousal hire, what hope does anyone else have of having a life and an academic career too? I’m just a sessional lecturer, so treating me like a robot is still the norm, but to hear that faculty members are also expected to be robots is disconcerting, to be sure.

Omer promises more reflections on his time in academia, which I will read with some interest when they come out, but I am sorry to see him go.

The second concerning report comes from Chris Collins. Collins, it seems, applied to some of the same tenured/tenure-track jobs as me this past year, and got the same boilerplate rejection emails as me. That a tenured professor is in the same job market as me is not especially surprising; what should be surprising is that no university wanted to hire him. Not only does he have a fairly strong empirical program, but he has made important contributions to syntactic theory: while the idea of label-free syntax is commonly attributed to Chomsky (2013; 2015), Chomsky cites Collins for the idea, and, slightly more recently, Collins’ work with Ed Stabler formalizing minimalist syntax in some ways anticipated Chomsky’s most recent move to workspace-based MERGE and, on a personal note, has been an invaluable resource for my own work.

Collins’ explanation of his unsuccessful applications is twofold, and both parts suggest bad trends in generative syntax.

The first explanation is one that I gather is common across all academic fields[1]—department budgets are too tight to hire a senior scholar like Collins when junior candidates are available and cheaper. Collins is probably right on this, but he is, unfortunately, commenting on the last war. While it’s probably true that junior hires are preferred over senior hires for budgetary reasons, junior tenure-track faculty are not the floor. Why hire an expensive faculty member whom you have to provide with an office, a small research budget, and a pension, when you can hire a few precarious adjuncts for cheaper?

As an aside of sorts, I remember having arguments in grad school with my fellow grad students about whether our department should hire tenured faculty away from other departments. The standing wisdom was that that was the trajectory—smaller departments hired junior faculty, and once they’d proved themselves they’d move on to bigger and better places, opening up a spot at their old place. There was a feeling that, sure, there was no growth in faculty positions, but departments would at least replace faculty who left or retired. I was skeptical of that line. University administrators had adopted the neoliberal model almost entirely—The Market reigned supreme—and The Market was clear: Linguistics, along with the broader humanities, was useless, so why not take every opportunity to squeeze those useless departments, say, by delaying replacement hires?

All of this is to say that I think Collins has identified a trend, but not a new one. The lower echelons of academia have been enduring this trend for some time now. Perhaps now that it’s reaching the upper echelons, we can see about stopping or reversing it … perhaps.

Collins’ second explanation is that, while he has made valuable contributions in recent years, the field doesn’t appreciate those contributions, and I think he might add the qualifier “yet” to that assessment. Again, I think he’s correct, and he’s identified a trend that I first saw identified by Norbert Hornstein, namely that much of what we call “theoretical syntax” is actually empirical/analytical work. This trend, I think, has morphed first to the point where so-called theoretical syntacticians were puzzled by actual theoretical work, then to the point where they are hostile to it. I suspect Collins has been a victim—though he in no way frames himself as a victim—of this hostility.

So, while there is a decided difference in degree between these two career setbacks, I think they are both part of the same trend, a trend which has been affecting more marginalized and vulnerable parts of academia for some time. The fact that this trend is now directly affecting established generative syntacticians should make the field as a whole take notice. At least I hope it does.

Notes

1 At least those fields that modern capitalism deems useless.

On pop-culture and our appetite for complexity

(A slightly edited version of a series of posts on Twitter)

There’s something to this take by Dan O’Sullivan, but I actually think part of the appeal of Marvel movies etc. is that they’re complex. In fact, I think one of the defining characteristics of popular 21st century film/TV is complexity.

A tweet from Dan O’Sullivan (@osullyville)

Lost, Game of Thrones, the MCU, Star Wars, they’re all complicated world-building exercises, and that’s what people love about them. They revel in the web of plot and characters.

It reminds me of an observation that Chomsky made once about sports talk radio:

When I’m driving, I sometimes turn on the radio and I find very often that what I’m listening to is a discussion of sports. These are telephone conversations. People call in and have long and intricate discussions, and it’s plain that quite a high degree of thought and analysis is going into that. People know a tremendous amount. They know all sorts of complicated details and enter into far-reaching discussion about whether the coach made the right decision yesterday and so on. These are ordinary people, not professionals, who are applying their intelligence and analytic skills in these areas and accumulating quite a lot of knowledge and, for all I know, understanding. On the other hand, when I hear people talk about, say, international affairs or domestic problems, it’s at a level of superficiality that’s beyond belief.

Noam Chomsky: Why Americans Know So Much About Sports But So Little About World Affairs

The people who call in to these shows are not necessarily highly educated, but they’re able to give very sophisticated and well-thought-out analyses of baseball or hockey, or whatever. Ask the average person, even a well-educated person, about world affairs, though, and you’ll get some very shallow platitudes. People are smart. They like understanding complex things. And, more importantly, they like debating and engaging with complexity.

The governing principle of most “democracies,” though, is that the political and business bosses do the thinking, and the rest of us should butt out.

Any attempt on our part to engage with, debate, or affect anything that matters is met with ridicule at best and tear-gas, truncheons, or bullets at worst.

So, the MCU didn’t make us dumb. It merely absorbed our natural impulse to engage with complexity, and, in doing so, distracted us from the complexity that really matters.

Coming back to O’Sullivan’s point: With complex works of fiction created by massive corporations, the choice of which aspects are simple and which are complex is up to their creators. So naturally, they’ll make those choices according to their own interests.

Conflict is between individual heroes and villains, and we can identify with or revile them, but certainly not the mass of people threatened by the villains or defended by the heroes.

Video essayist Evan Puschak, AKA The Nerdwriter, gives a similar analysis.

Of course, there’s another question lurking: Don’t the more artsy films serve the same function? Doesn’t SILENCE or THE LIGHTHOUSE just distract us from the real problems too? Maybe, but, if it’s done well, I think not.

I think the key ingredient of fiction that subverts that function is ambiguity. World-building fiction presents a complete, closed system: nothing in or out. Ambiguity forces us to actively interpret, and to do so under uncertainty.

To resolve such ambiguity, we have to bring our experience (of the real world) into the fiction, and that necessarily means examining our own experience, to some extent.

It doesn’t give us the tools to understand geopolitics; it gives us the tools to be okay with the ambiguity.

Originally tweeted by Dan Milway (@thrilway) on March 1, 2022.

A Protest

We want our freedom.
Not the fearful freedom
Hurled, an epithet against my neighbour.
Not the freedom to do what I will
Hang the consequences.
Not the freedom of mine vs yours.
But, the broad freedom to be 
With one another again,
To care for each other,
To imagine and make our future
A brighter one together.
The freedom of us.
The freedom denied 
To so many by this plague,
To every prisoner locked away,
To those you'd rather not see.
The freedom that the workers check
Every day to punch the clock.
We demand our freedom
From those that sacrifice the ill
From those that fill the jails
From those that own the clocks
And use their implements to scare
Us into playing their malicious tune.
We need our freedom.
To protect what matters
Each other
Our homes
Our home
We'll take our freedom.

Some idle thoughts on the arguments for semantic externalism/internalism

This semester I’m teaching an intro semantics course for the first time and I decided to use Saeed’s Semantics as a textbook. It seems like a good textbook; it gives a good survey of all the modern approaches to semantics—internalist, externalist, even so-called cognitive semantics—though the externalist bias is clear if you know what to look for. For instance, the text is quick to bring up the famous externalist thought experiments—Putnam’s robotic cats, Quine’s gavagai, etc.—to undercut the internalist approaches, but doesn’t really seem to present the internalist critiques and counterarguments. So, I’ve been striving to correct that in my lectures.

While I was preparing my most recent lecture, something struck me. More precisely, I was suddenly able to put words to something that’s bothered me for a while about the whole debate: The externalist case is strongest for natural kinds, but the internalist case is strongest for human concepts. Putnam talks about cats and water, Kripke talks about tigers and gold, while Katz talks about bachelors and sometimes artifacts. This is not to say that the arguments on either side are unanswerable—Chomsky, I think, has provided pretty good arguments that, even for natural kinds, our internal concepts are quite complicated, and there are many thorny issues for internalist approaches too—but the two sides do have slightly different empirical bases, which no doubt inform their approaches—if your theory can handle artifact concepts really well, you might be tempted to treat everything that way.

I don’t quite know what to make of this observation yet, but I wanted to write it down before I forgot about it.


There’s also a potential, but maybe half-baked, political implication to this observation. Natural kinds are more or less constant in that, while they can be tamed and used by humans, we can’t really change them that much, and thinking that you can, say, turn lead into gold would mark you as a bit of a crackpot. Artifacts and social relations, on the other hand, are literally created by free human action. If you view the world with natural kinds at the center, you may be led to the view that the world has its own immutable laws that we can maybe harness, maybe adapt to, but never change.

If, on the other hand, your theory centers artifacts and social relations, then you might be led to the conclusion, as expressed by the late David Graeber, that “the ultimate hidden truth of the world is that it is something we make and could just as easily make differently.”

But, of course, I’m just speculating here.

Unmoored theory

I’ve written before about the dichotomy of descriptive vs theoretical sciences, but I’ve recently noticed another apparent dichotomy within theoretical sciences—expansionary vs focusing sciences. Expansionary sciences are those whose domain tends to expand—(neo)classical economics seems to claim all human interaction in its domain; formal semantics now covers pragmatics, hand gestures, and monkey communication—while focusing sciences tend to have a rather constant domain or even a shrinking one—chemistry today is about pretty much the same things as it was in the 17th century; generative syntactic theory is still about the language faculty. Assuming this is true,[1] the question is whether it reflects some underlying difference between these sciences. I’d like to argue that the distinction follows from how firm a science’s foundations are, and in particular from what I’ll call its empirical conjecture.

Every scientific theory, I think, basically takes the form of a conjoined sentence: “There are these things/phenomena in the world and they act like this.” The second conjunct is the formal system that gives a theory its deductive power. The first conjunct is the empirical conjecture, and it turns the deductions of the formal system into predictions. While every science that progresses does so by positing new sorts of invisible entities, categories, etc., they all start with more or less familiar entities, categories, etc.—planets, metals, persons, and so on. This link to the familiar is the empirical foundation of a science. Sciences with a firm foundation are those whose empirical conjecture can be uncontroversially explained to a lay person or even an expert critic operating in good faith.

Contemporaries of, say, Robert Boyle might have thought the notion of corpuscles insanity, but they wouldn’t disagree that matter exists, exists in different forms, and that some of those forms interact in regular ways. Even the fiercest critic of UG, provided they are acting in good faith, would acknowledge that humans have a capacity for language and that that capacity probably has to do with our brains.

The same, I think, cannot be said about (neo)classical economics or formal semantics.[2] Classical economics starts with the conjecture that there are these members of the species homo economicus—the perfectly rational, self-interested, utility-maximizing agent—and derives theorems from there. This is obviously a bad characterization of humans. It is simultaneously too dim a view of humans—we behave altruistically and non-individualistically all the time—and one that gives us far too much credit—we are far from perfectly rational. Formal semantics, on the other hand, starts with the conjecture that meaning is reference—that words have meaning only insofar as they refer to things in the world. While not as obviously false as the homo economicus conjecture, the referentialist conjecture is still false—most words, upon close inspection, do not refer,[3] and there is a whole universe of meaning that has little to do with reference.

Most economists and semanticists would no doubt object to what the previous paragraph says about their discipline, and the objections would take one of two forms. Either they would defend homo economicus/referentialism, or they would downplay the importance of the conjecture in question—“Homo economicus is just a useful teaching tool for undergrads. No one takes it seriously anymore!”[4] “Semanticists don’t mean reference literally, we use model theory!”—and it’s this sort of response that I think can explain the expansionary behaviour of these disciplines. Suppose we take these objections to be honest expressions of what people in the field believe—that economics isn’t about homo economicus and formal semantics isn’t about reference. Well then, what are they about? The rise of behavioural economics suggests that economists are still looking for a replacement model of human agency, and model theory is basically just reference delayed.

The theories, then, seem to be about nothing at all—or at least nothing that exists in the real world—and as a result, they can be about anything at all—they are unmoored.

Furthermore, there’s an incentive to expand your domain when possible. A theory of nothing obviously can’t be justified by giving any sort of deep explanation of any one aspect of nature, so it has to be justified by appearing to offer explanations for a breadth of topics. Neoclassical economics can’t seem to predict when a bubble will burst, or what will cause inflation, but it can give what looks like insight into family structures. Formal semantics can’t explain why “That pixel is red and green” is contradictory, but it provides a formal language to translate pragmatics into.

There’s a link here to my past post about falsification, because just as a theory about nothing can be a theory about anything, a theory about nothing cannot be false. So, watch out—if your empirical domain seems to be expanding, you might not be doing science any more.

Notes

1 It’s pretty much a tautology that a science’s domain will either grow, stay constant, or shrink over time
2 Now obviously, there’s a big difference between the two fields—neoclassical economics is extremely useful to the rich and powerful since it lets them justify just about any horrendous crimes they would want to commit in the name of expanding their wealth and power, while formal semantics is a subdiscipline of a minor oddball discipline on the boundaries of humanities, social science, and cognitive science. But I’m a linguist, and I think mostly linguists read this.
3 I could point you to my own writing on this, the works of Jerrold Katz, and arguments from Noam Chomsky on referentialism, or I could point out that one of the godfathers of referentialism, Ludwig Wittgenstein, seems to have repudiated it in his later work.
4 Though, as the late David Graeber pointed out, economists never object when homo economicus is discussed in a positive light.

What does falsification look like anyway?

Vulcan vs Neptune

There’s an argument that plays out every so often in linguistics that goes as follows:

Critic: This data falsifies theory T.
Proponent: Not necessarily, if you consider arguments X,Y, and Z.
Critic: Well, then theory T seems to be unfalsifiable!

This is obviously a specious argument on the part of the critic, since unfalsified does not entail unfalsifiable, but I think it stems from a very understandable frustration—theorists often have an uncanny ability to wriggle free of data that appears to falsify their theories, even though falsificationism is assumed by a large majority of linguists. The problem is that the logic of falsificationism, while quite sound, maybe even unimpeachable, turns out to be fiendishly difficult to apply.

At its simplest, the logic of falsificationism says that a theory is scientific insofar as one can construct a basic statement—i.e., a statement of fact—that would contradict the theory. This, of course, is an oversimplification of Karl Popper’s idea of Critical Rationalism in a number of ways. For one, falsifiability is not an absolute notion. Rather, we can compare the relative falsifiability of two theories by looking at what Popper calls their empirical content—the number of basic statements that would contradict them. So if a simple theoretical statement P has a particular empirical content, then the conjunction P & Q will have a greater empirical content, and the disjunction P v Q will have a lesser empirical content. This is a useful heuristic when constructing or criticizing a theory internally, and it seems like a straightforward guide to testing theories empirically. Historically, though, it has not been so straightforward, largely because it is often difficult to recognize when we’ve arrived at and accurately formulated a falsifying fact. In fact, it is often, maybe always, the case that we don’t recognize a falsifying fact as such until after one theory has been superseded by another.
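
To make the comparison concrete, here is a quick sketch in my own notation, stated in terms of classes of potential falsifiers (Popper’s usual formulation) rather than a count of them:

\[
\mathrm{Ct}(T) \;=\; \{\, b : b \text{ is a basic statement that contradicts } T \,\}, \qquad
\mathrm{Ct}(P \lor Q) \;\subseteq\; \mathrm{Ct}(P) \;\subseteq\; \mathrm{Ct}(P \land Q)
\]

Any basic statement that contradicts P also contradicts P & Q, while a basic statement contradicts P v Q only if it contradicts both disjuncts, so the conjunction has the most potential falsifiers and the disjunction the fewest.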

Take for instance the case of the respective orbits of Mercury and Uranus. By the 19th century, Newtonian mechanics had allowed astronomers to make very precise predictions about the orbits of the planets, and based on those predictions, there was a problem: two of the planets were misbehaving. First, it was discovered that Uranus—then the furthest known planet from the sun—wasn’t showing up where it should have been. Basically, Newton’s mechanics predicted that on such and so day and time Uranus would be in a particular spot in the sky, but the facts were otherwise. Rather than cry “falsification!”, though, the astronomers of the day hypothesized an object on the other side of Uranus that was affecting its orbit. One such astronomer, Urbain Le Verrier, was even able to work backwards and predict where that object could be found. So in September of 1846, armed with Le Verrier’s calculations, Johann Gottfried Galle was able to observe an eighth planet—Neptune. Thus, an apparent falsification became corroboration.

Urbain Le Verrier (1811-1877)
Johann Galle (1812-1910)

I’ve previously written about this story as a vindication of the theory-first approach to science. What I didn’t write about, and what is almost never discussed in this context, is Le Verrier’s work on the misbehaving orbit of Mercury. Again, armed with Newton’s precise mechanics, Le Verrier calculated the Newtonian prediction for Mercury’s orbit, and again,[1] Mercury didn’t behave as expected. Again, rather than throw out Newtonian mechanics, Le Verrier hypothesized the planet Vulcan between Mercury and the sun, and set about trying to observe it. While many people claimed to observe Vulcan, none of these observations were reliably replicated. Le Verrier was undeterred, though, perhaps because observing a planet that close to the sun was quite tricky. Of course, it would be easy to paint Le Verrier as an eccentric—indeed, his Vulcan hypothesis is somewhat downplayed in his legacy—but he doesn’t seem to have been treated so by his contemporaries. The Vulcan hypothesis wasn’t universally believed, but neither does it seem to have been the Flat-Earth theory of its day.

It was only when Einstein used his General Theory of Relativity to accurately calculate Mercury’s orbit that the scientific community seems to have abandoned the search for Vulcan. Mercury’s orbit is now considered a classic successful test of General Relativity, but why don’t we consider it a refutation of Newtonian Mechanics? Strict falsificationism would seem to dictate that, but then a strict falsificationist would have thrown out Newtonian Mechanics as soon as we noticed Uranus misbehaving. So, falsificationism of this sort leads us to something of a paradox—if a single basic statement contradicts a theory, there’s no way of knowing whether there is some second basic statement that, in conjunction with the first, could save the theory.

Still, it’s difficult to toss out falsification entirely, because a theory that doesn’t reflect reality may be interesting but isn’t scientific.[2] Also, any reasonable person who has ever tried to give an explanation of any phenomenon probably rejects most of their own ideas rather quickly on empirical grounds. We should instead adopt falsificationism as a relative notion—use it when comparing multiple theories. So, Le Verrier was ultimately wrong, but acted reasonably—he had a pretty good theory of mechanics, so he worked to reconcile it with some problematic data. Had someone developed General Relativity in Le Verrier’s time, it would have been unreasonable to insist that a hypothesized planet was a better explanation than an improved theory.

Returning to the hypothetical debate between the Critic and the Proponent, then, I think a reasonable, albeit slightly rude, response from the Proponent would be “Well, do you have a better theory?”

Notes

1 Technically though, Le Verrier’s work on Mercury predated his work on Uranus
2 Though sometimes, theories which seem to be empirically idle end up being scientifically important (cf. non-Euclidean geometry)

Chris Collins interviews Noam Chomsky about formal semantics

Over on his blog, Chris Collins has posted the text of a conversation he had over email with Noam Chomsky on the topic of formal semantics. While Chomsky has been very open about his views on semantics for a long time, this interview is worth reading for working linguists because Collins frames the conversation around work by linguists—Heim & Kratzer, and Larson & Segal—rather than philosophers—Quine, Austin, Wittgenstein, Frege, et al.

You should read it for yourself, but I’d like to highlight one passage that jumped out at me. Of the current state of the field, Chomsky says:

Work in formal semantics has been some of the most exciting parts of the field in recent years, but it hasn’t been treated with the kind of critical analysis that other parts of syntax (including generative phonology) have been within generative grammar since its origins. Questions about explanatory power, simplicity, learnability, generality, evolvability, and so. More as a descriptive technology. That raises questions.

p. 5 (emphasis mine)

It’s true that formal semantics today is a vibrant field. There are always new analyses, the methods of formal semantics are being applied to new sets of data, and, indeed, it’s virtually impossible to even write a paper on syntax without a bit of formal semantics. Yet it is also true that almost no one has been thinking about the theory underpinning the analytical technology. As a result, I don’t think many working semanticists are even aware that there is such a theory, or, if they are aware, they tend to wave their hands, saying “that’s philosophy”. Formal semanticists, it seems, have effectively gaslit themselves.

Chomsky’s framing here is interesting, too. He could be understood as suggesting that formal semantics could engage in theoretical inquiry while maintaining its vibrancy. It’s not clear that this is the case, though. Currently, formal semantics bears a striking similarity to the machine-learning/neural-nets style of AI, in that both are being applied to a very wide array of “problems”, but a closer look at the respective technologies would very likely cause us to question whether they should be. Obviously, the stakes are different—no one’s ever been injured in a car crash because they used lambdas to analyze a speech act—but the principle is the same.

But I digress. Collins and Chomsky’s conversation is interesting and very accessible to anyone who is familiar with Heim & Kratzer-style semantics. It’s well worth a read.

Break up Big University; Create Jobs

The argument in this article (tweeted out by Shit Academics Say) is just designed to pit one group of workers (sessional lecturers) against another (tenured faculty). That's because it ignores the fact that the number of faculty positions, at least in Canada, is kept artificially low.

Notice that there is no mention whatsoever of class sizes. Coming from UofT, I can tell you, class sizes have been out of control for years. My intro bio class was so big that no lecture hall could house it. Lectures were in the 1730-seat Convocation Hall.

Convocation Hall is a beautiful building, but it is designed for ceremony, not pedagogy. There is no chalkboard or whiteboard, and if there were, the students in the upper balcony wouldn't be able to read them. What's more, the seats have no writing surface for note-taking.

More recently, I taught a "general interest" linguistics course (a "bird course") so big that it also couldn't be housed in a proper lecture hall. Instead we had what was basically a movie theatre. The lights were perpetually dimmed, and again, no chalkboard.

These sorts of non-classrooms really only allow for one type of teaching style, possibly the worst type: A lecturer droning on about a slide deck.

Beyond just the lectures, it's quite impossible for all 1000+ students of such a class to have direct access to their professors in office hours. There aren't enough hours in the day.

(Of course, most students don't go to office hours. It might make a good action, though, for student unions to organize students to go to office hours en masse. Not to shout slogans at professors, but just to ask for help.)

Clearly, UofT, the largest university in Canada, has reached its capacity of students.

Imagine, though, if we kept tenure and the researcher/teacher model of academia and put hard limits on class sizes. Say, 200 for 1st yr classes, 100 for 2nd yr and so on. How would that affect things?

The neoliberal response would probably be "well, you'd have to have fewer students, probably only well-off white students."
But there's another possibility: Expand the faculty size by creating new universities.

This could mean founding a brand new university, or it could mean splitting up oversized universities. UofT, for instance, has three campuses: Downtown, Scarborough, and Mississauga. Why not spin them off from each other?

There are definitely ways to do this that I haven't thought about, and none of them are perfect, and all of them require public funding. But that's true of any societal problem.

But we can't really expect to solve the problem without an adequate diagnosis of the problem's source.
There's no shortage of qualified educators, nor is there a shortage of people who want/need an education. The problem is infrastructure.

So, whenever someone makes an argument pitting workers against workers, it can only really serve to obscure the fact that the problem is elsewhere—with management, with bureaucracy, with politicians.

Originally tweeted by Dan Milway (@thrilway) on June 26, 2021.

But it’s obvious, isn’t it?

As a linguist or, more specifically, as a theoretical syntactician, I hold and often express some minority opinions.[1] Often these opinions are met with bafflement and an assertion like “We’ve known for years that that’s not the case” because of this phenomenon or that piece of data—“Control is derived by movement? But what about de se interpretation??” “Merge is free? But what about c-selection??” “Long-distance Agree isn’t real? But what about English existential clauses??”[2] These sorts of objections are often tossed out as if the data speaks for itself when, really, the thing that makes scientific inquiry so tough is that the data rarely speaks for itself, and when it does, it doesn’t do so clearly.

Take, for instance, the case of English existential clauses like (1) and (2) and how they are used as absolute proof of the existence of Long-Distance Agree.

(1) There ?seems/seem to be three fish in the tank.
(2) There seems/*seem to be a fish in the tank.

In both sentences, the grammatical subject is the expletive there, but the verb agrees with a DP[3] that appears to be structurally “lower” in the clause. Therefore, there must be some non-movement way of getting features from a lower object onto a higher object—Long-Distance Agree. This is often presented as the obvious conclusion, the only conclusion, or the simplest conclusion. “Obvious” is in the eye of the beholder and doesn’t usually mean “correct”; Norbert Hornstein, in his A Theory of Syntax, proposes three alternative analyses to Long-Distance Agree, so it is hardly the only conclusion; only “simplest” has legs, although that’s debatable.
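
For concreteness, here is a rough schematic of the configuration (my own simplified bracketing, abstracting away from the internal structure of the embedded clause; φ stands for the person/number features involved in agreement):

[TP There T[uφ] [ seem(s) [TP to be [DP three fish][φ: 3pl] in the tank ]]]

On the Long-Distance Agree view, T’s unvalued φ-features are valued by the φ-features of the lower DP without that DP ever moving to subject position.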

Occam’s razor says “entities should not be multiplied without necessity,” and any analysis of (1) and (2) without Long-Distance Agree will have to say that in both cases, the agreeing DP is covertly in subject position. These covert subjects are argued to constitute an unnecessary multiplication of entities, but one could just as easily argue that Long-Distance Agree is an unnecessary entity. What’s more, covert movement and silent elements both have independent arguments in their favour.

Of course, the covert subject analysis of (1) and (2) is not without its flaws. Chief among them, in my opinion, is that it would seem to wrongly predict that (1) and (2) mean the same thing as (3) and (4), respectively.

(3) Three fish seem to be in the tank.
(4) A fish seems to be in the tank.

These sentences differ from (1) and (2) in that they—(3) and (4)—presuppose the existence of three fish or a single fish, while (1) and (2) merely assert it. This contrast is clearest in (5)-(8), which are examples that Chomsky has been using for several decades.

(5) There’s a fly in my soup.
(6) There’s a flaw in my argument.
(7) A fly is in my soup.
(8) *?A flaw is in my argument.

Likewise, Long-Distance Agree has its own problems, some of which I discuss in my latest paper. Indeed, it is vanishingly rare in any field of inquiry—or life itself—to find an unproblematic solution to a problem.

My goal here isn’t to argue that Long-Distance Agree is wrong,[4] but to point out that it’s not a foregone conclusion. In fact, if we listed the hypotheses/theories/notions that most syntacticians take to be (nearly) unquestionable and honestly assessed the arguments in their favour, I doubt that many would turn out to be as robust as they seem. This doesn’t mean that we need to reject every idea that is less than 100% solid, just that we should hold on to them a little more loosely. As a rule, we should all carry with us the idea that we could very well be wrong about almost everything. The world’s more interesting that way.

Notes

1 Outside of syntactic theory too
2 I have a hypothesis that the vehemence with which someone will defend a theory or analysis is correlated with how much they struggled to understand it in school. Basically, we’re more likely to die on a hill if we had to fight to summit that hill. This has some interesting implications that I might get into in a later post.
3 I still think I buy the DP hypothesis, but I’m also intrigued by Chomsky’s recent rejection of it and amused by the reaction to this rejection.
4 Though, I do think it is.

New LingBuzz Paper

(or “How I’ve been spending my unemployment*”)

Yesterday I finished and posted a paper to LingBuzz. It’s titled “Agree as derivational operation: Its definition and discontents” and its abstract is given below. If it sounds interesting, have a look and let me know what you think.

Using the framework laid out by Collins and Stabler (2016), I formalize Agree as a syntactic operation. I begin by constructing a formal definition of a version of long-distance Agree in which a higher object values a feature on a lower object, and modify that definition to reflect several versions of Agree that have been proposed in the “minimalist” literature. I then discuss the theoretical implications of these formal definitions, arguing that Agree (i) muddies our understanding of the evolution of language, (ii) requires a new conception of the lexicon, (iii) objectively and significantly increases the complexity of syntactic derivations, and (iv) unjustifiably violates NTC in all its non-vacuous forms. I conclude that Agree, as it is commonly understood, should not be considered a narrowly syntactic operation.

*Thanks to the Canada Recovery Benefit, I was able to feed myself and make rent while I wrote this.