Blog

Some good news on the publication front

Today I woke up to an email from the editor of Biolinguistics informing me that my manuscript “A parallel derivation theory of adjuncts” had been accepted for publication. I was quite relieved, especially since I had been expecting some news about my submission for a couple of days—the ability to monitor the progress of submissions on a journal’s website is a decidedly mixed blessing—and there was a definite possibility in my mind that it could have been rejected.

It was also a relief because it’s been a long road with this paper. I first wrote about the kernel of its central idea—that syntactic adjuncts are entirely separate objects from their “hosts”—in my thesis, and I presented it a few times within the University of Toronto Linguistics Department. I first realized that it had some legs when it was accepted as a talk at the 2020 LSA Meeting in New Orleans, and I started working on it in earnest in the spring and summer of 2020, submitting the first manuscript version to a different journal in August 2020.

If you follow me on Twitter, you saw my reactions to the peer-review process in real time, but it’s worth summarizing. Versions of this manuscript underwent peer review at multiple journals, and in every case there were one or two constructive reviews—some positive, and some negative but nevertheless pointing out serious yet fixable issues—but invariably there was also one reviewer who was clearly hostile to the manuscript, whose comments were often sarcastic and vague.

I’m sure the manuscript improved over the various submissions, but I believe that the main reason the paper will finally be published is that the editor of Biolinguistics, Kleanthes Grohmann, recognized and agreed with me that one of the reviewers was being unreasonable, so I definitely owe him my gratitude.

There are more edits to go, but you can look forward to seeing my paper in Biolinguistics in the near future.

Why are some ideas so sticky? A hypothesis

Anyone who has tried to articulate a new idea or criticize old ones may have noticed that some ideas are washed away relatively easily, while others seem to actively resist even the strongest challenges—some ideas are stickier than others. In some cases, there’s an obvious reason for this stickiness—in some cases there’s even a good reason for it. Some ideas are sticky because they’ve never really been interrogated. Some are sticky because there are powerful parts of society that depend on them. Some are sticky because they’re true, or close to true. But I’ve started to think there’s another reason an idea can be sticky—the amount of mental effort people put into understanding the idea as students.

Take, for instance, X-bar theory. I don’t think there’s some powerful cabal propping it up, it’s not old enough to just be taken for granted, and Chomsky’s Problems of Projection papers showed that it was not really tenable. Yet X-bar persists, and not just in how syntacticians draw trees or how they informally talk about them. I remember that commentary on my definition of minimal search here involved puzzlement about why I didn’t simply formalize the idea that specifiers were invisible to search, followed by more puzzlement when I explained that the notion of specifier was unformulable.

In my experience, the stickiness of X-bar theory—and syntactic projection/labels more broadly—doesn’t manifest itself in attempts to rebut arguments against it, but in attempts to save it—to reconstitute it in a theory that doesn’t include it.[1] This is very strange behaviour: X-bar is a theoretical construct, valid only insofar as it is coherent and empirically useful. Why are syntacticians fighting for it? I wondered about this for a while, and then I remembered my experience learning X-bar and teaching it—it’s a real challenge. It’s probably the first challenging theoretical construct that syntax students are exposed to, and it tends to be presented as a fait accompli, so students just have to learn how it functions. As a result, those students who do manage to figure it out are proud of it and defend it like someone protecting a cherished possession.[2]

Of course, it’s a bit dangerous to speculate about the psychological motivations of others, but I’m certain I’ve had this reaction in the past when someone’s challenged an idea that I at one point struggled to learn. And I’ve heard students complain about the fact that every successive level of learning syntax starts with “everything you learned last year is wrong”—or at least that’s the sense they get. So, I have a feeling there’s at least a kernel of truth to my hypothesis. Now, how do I go about testing it?


Addendum

As I was writing this, I remembered something I frequently think when I’m preparing tests and exams that I’ve thus far only formulated as a somewhat snarky question:

How much of our current linguistic theory depends on how well it lends itself to constructing problem sets and exam questions?

References

1 My reading of Zeijlstra’s chapter in this volume is as one such attempt
2 I think I may be describing “effort justification,” but I’m basing this just on the Wikipedia article

Bad omens for generative syntax

In the last few weeks there have been a couple of posts in the generative linguistics blogosphere that don’t bode well for the field.

The first is the sad announcement from Omer Preminger that he is leaving academia in order to live in the same town as his wife. This news is also rather shocking, since Preminger is a fairly prominent syntactician—someone whose work, though I didn’t always agree with it, had to be addressed seriously—and if a scholar of his prominence and ability can’t negotiate something as reasonable as a spousal hire, what hope does anyone else have of having both a life and an academic career? I’m just a sessional lecturer, so treating me like a robot is still the norm, but to hear that faculty members are also expected to be robots is disconcerting, to be sure.

Omer promises more reflections on his time in academia, which I will read with some interest when they come out, but I am sorry to see him leaving academia.

The second concerning report comes from Chris Collins. Collins, it seems, applied to some of the same tenured/tenure-track jobs as me this past year and got the same boilerplate rejection emails as me. That a tenured professor is in the same job market as me is not especially surprising; what should be surprising is that no university wanted to hire him. He not only has a fairly strong empirical program, but he has made important contributions to syntactic theory: while the idea of label-free syntax is commonly attributed to Chomsky (2013; 2015), Chomsky himself cites Collins for the idea, and, slightly more recently, Collins’ work with Ed Stabler formalizing minimalist syntax in a few ways predicted Chomsky’s most recent move to workspace-based MERGE and, on a personal note, has been an invaluable resource to my work.

Collins’ explanation of his unsuccessful applications is twofold, and both parts suggest bad trends in generative syntax.

The first explanation is one that I gather is common across all academic fields[1]—department budgets are too tight to hire a senior scholar like Collins when junior candidates are available and cheaper. Collins is probably right about this, but he is unfortunately commenting on the last war. While it’s probably true that junior hires are preferred over senior hires for budgetary reasons, junior tenure-track faculty are not the floor. Why hire an expensive faculty member whom you have to provide with an office, a small research budget, and a pension, when you can hire a few precarious adjuncts for less?

As an aside of sorts, I remember having arguments in grad school with my fellow grad students about whether our department should hire tenured faculty away from other departments. The standing wisdom was that that was the trajectory—smaller departments hired junior faculty, and once they’d proved themselves they’d move on to bigger and better places, opening up a spot at their old place. There was a feeling that, sure, there was no growth in faculty positions, but they were at least going to replace faculty that left or retired. I was skeptical of that line. University administrators had adopted the neoliberal model almost entirely—The Market reigned supreme—and The Market was clear: Linguistics, along with the broader humanities, was useless, so why not take every opportunity to squeeze those useless departments, say, by delaying replacement hires.

All of this is to say that I think Collins has identified a trend, but not a new one. The lower echelons of academia have been enduring this trend for some time now. Perhaps now that it’s reaching the upper echelons, we can see about stopping or reversing it … perhaps.

Collins’ second explanation is that, while he has made valuable contributions in recent years, the field doesn’t appreciate those contributions, and I think he might add the qualifier “yet” to that assessment. Again, I think he’s correct, and he’s identified a trend that I first saw identified by Norbert Hornstein, namely that much of what we call “theoretical syntax” is actually empirical/analytical work. This trend, I think, has morphed first to the point where so-called theoretical syntacticians were puzzled by actual theoretical work, then to the point where they are hostile to it. I suspect Collins has been a victim—though he in no way frames himself as a victim—of this hostility.

So, while there is a decided difference in degree between these two career setbacks, I think they are both part of the same trend, a trend which has been affecting more marginalized and vulnerable parts of academia for some time. The fact that this trend is now directly affecting established generative syntacticians should make the field as a whole take notice. At least I hope it does.

References

1 At least those fields that modern capitalism deems useless.

On pop-culture and our appetite for complexity

(A slightly edited version of a series of posts on Twitter)

There’s something to this take by Dan O’Sullivan, but I actually think part of the appeal of Marvel movies etc. is that they’re complex. In fact, I think one of the defining characteristics of popular 21st century film/TV is complexity.

A tweet from Dan O’Sullivan (@osullyville)

Lost, Game of Thrones, the MCU, Star Wars, they’re all complicated world-building exercises, and that’s what people love about them. They revel in the web of plot and characters.

It reminds me of an observation that Chomsky made once about sports talk radio:

When I’m driving, I sometimes turn on the radio and I find very often that what I’m listening to is a discussion of sports. These are telephone conversations. People call in and have long and intricate discussions, and it’s plain that quite a high degree of thought and analysis is going into that. People know a tremendous amount. They know all sorts of complicated details and enter into far-reaching discussion about whether the coach made the right decision yesterday and so on. These are ordinary people, not professionals, who are applying their intelligence and analytic skills in these areas and accumulating quite a lot of knowledge and, for all I know, understanding. On the other hand, when I hear people talk about, say, international affairs or domestic problems, it’s at a level of superficiality that’s beyond belief.

Noam Chomsky: Why Americans Know So Much About Sports But So Little About World Affairs

The people who call in to these shows are not necessarily highly educated, but they’re able to give very sophisticated and well-thought-out analyses of baseball or hockey or whatever; ask the average person, even a well-educated one, about world affairs, though, and you’ll get some very shallow platitudes. People are smart. They like understanding complex things. And, more importantly, they like debating and engaging with complexity.

The governing principle of most “democracies,” though, is that the political and business bosses do the thinking, and the rest of us should butt out.

Any attempt on our part to engage with, debate, or affect anything that matters is met with ridicule at best and tear-gas, truncheons, or bullets at worst.

So, the MCU didn’t make us dumb. It merely absorbed our natural impulse to engage with complexity, and, in doing so, distracted us from the complexity that really matters.

Coming back to O’Sullivan’s point: With complex works of fiction created by massive corporations, the choice of which aspects are simple and which are complex is up to their creators. So naturally, they’ll make those choices according to their own interests.

Conflict is between individual heroes and villains, and we can identify with or revile them, but certainly not the mass of people threatened by the villains or defended by the heroes.

Video essayist Evan Puschak, AKA The Nerdwriter, gives a similar analysis:

Of course, there’s another question lurking: Don’t the more artsy films serve the same function? Doesn’t SILENCE or THE LIGHTHOUSE just distract us from the real problems too? Maybe, but, if it’s done well, I think not.

I think the key ingredient of fiction that subverts that function is ambiguity. World-building fiction presents a complete, closed system: nothing in or out. Ambiguity forces us to actively interpret, and to do so under uncertainty.

To resolve such ambiguity, we have to bring our experience (of the real world) into the fiction, and that necessarily means examining our own experience, to some extent.

It doesn’t give us the tools to understand geopolitics; it gives us the tools to be okay with the ambiguity.

Originally tweeted by Dan Milway (@thrilway) on March 1, 2022.

A Protest

We want our freedom.
Not the fearful freedom
Hurled, an epithet against my neighbour.
Not the freedom to do what I will
Hang the consequences.
Not the freedom of mine vs yours.
But, the broad freedom to be 
With one another again,
To care for each other,
To imagine and make our future
A brighter one together.
The freedom of us.
The freedom denied 
To so many by this plague,
To every prisoner locked away,
To those you'd rather not see.
The freedom that the workers check
Every day to punch the clock.
We demand our freedom
From those that sacrifice the ill
From those that fill the jails
From those that own the clocks
And use their implements to scare
Us into playing their malicious tune.
We need our freedom.
To protect what matters
Each other
Our homes
Our home
We'll take our freedom.

Some idle thoughts on the arguments for semantic externalism/internalism

This semester I’m teaching an intro semantics course for the first time, and I decided to use Saeed’s Semantics as a textbook. It seems like a good textbook; it gives a good survey of all the modern approaches to semantics—internalist, externalist, even so-called cognitive semantics—though the externalist bias is clear if you know what to look for. For instance, the text is quick to bring up the famous externalist thought experiments—Putnam’s robotic cats, Quine’s gavagai, etc.—to undercut the internalist approaches, but doesn’t really present the internalist critiques and counterarguments. So, I’ve been striving to correct that in my lectures.

While I was preparing my most recent lecture, something struck me. More precisely, I was suddenly able to put words to something that’s bothered me for a while about the whole debate: the externalist case is strongest for natural kinds, but the internalist case is strongest for human concepts. Putnam talks about cats and water, Kripke talks about tigers and gold, while Katz talks about bachelors and sometimes artifacts. This is not to say that the arguments on either side are unanswerable—Chomsky, I think, has provided pretty good arguments that, even for natural kinds, our internal concepts are quite complicated, and there are many thorny issues for internalist approaches too—but the two sides do have slightly different empirical bases, which no doubt inform their approaches—if your theory can handle artifact concepts really well, you might be tempted to treat everything that way.

I don’t quite know what to make of this observation yet, but I wanted to write it down before I forgot about it.


There’s also a potential, but maybe half-baked, political implication to this observation. Natural kinds are more or less constant in that, while they can be tamed and used by humans, we can’t really change them that much, and thinking that you can, say, turn lead into gold would mark you as a bit of a crackpot. Artifacts and social relations, on the other hand, are literally created by free human action. If you view the world with natural kinds at the center, you may be led to the view that the world has its own immutable laws that we can maybe harness, maybe adapt to, but never change.

If, on the other hand, your theory centers artifacts and social relations, then you might be led to the conclusion, as expressed by the late David Graeber, that “the ultimate hidden truth of the world is that it is something we make and could just as easily make differently.”

But, of course, I’m just speculating here.

Unmoored theory

I’ve written before about the dichotomy of descriptive vs theoretical sciences, but I’ve recently noticed another apparent dichotomy within theoretical sciences—expansionary vs focusing sciences. Expansionary sciences are those whose domain tends to expand—(neo)classical economics seems to claim all human interaction in its domain; formal semantics now covers pragmatics, hand gestures, and monkey communication—while focusing sciences tend to have a rather constant domain or even a shrinking one—chemistry today is about pretty much the same things as it was in the 17th century; generative syntactic theory is still about the language faculty. Assuming this is true,[1] the question is whether it reflects some underlying difference between these sciences. I’d like to argue that the distinction follows from how firm a science’s foundations are, and in particular from what I’ll call its empirical conjecture.

Every scientific theory, I think, basically takes the form of a conjoined sentence: “There are these things/phenomena in the world, and they act like this.” The second conjunct is the formal system that gives a theory its deductive power. The first conjunct is the empirical conjecture, and it turns the deductions of the formal system into predictions. While every science that progresses does so by positing new sorts of invisible entities, categories, etc., they all start with more or less familiar entities, categories, etc.—planets, metals, persons, and so on. This link to the familiar is the empirical foundation of a science. Sciences with a firm foundation are those whose empirical conjecture can be uncontroversially explained to a layperson or even to an expert critic operating in good faith.

Contemporaries of, say, Robert Boyle might have thought the notion of corpuscles insanity, but they wouldn’t disagree that matter exists, exists in different forms, and that some of those forms interact in regular ways. Even the fiercest critic of UG, provided they are acting in good faith, would acknowledge that humans have a capacity for language and that that capacity probably has to do with our brains.

The same, I think, cannot be said about (neo)classical economics or formal semantics.[2] Classical economics starts with the conjecture that there are these members of the species homo economicus—the perfectly rational, self-interested, utility-maximizing agent—and derives theorems from there. This is obviously a bad characterization of humans. It simultaneously takes too dim a view of us—we behave altruistically and non-individualistically all the time—and gives us far too much credit—we are far from perfectly rational. Formal semantics, on the other hand, starts with the conjecture that meaning is reference—that words have meaning only insofar as they refer to things in the world. While not as obviously false as the homo economicus conjecture, the referentialist conjecture is still false—most words, upon close inspection, do not refer,[3] and there is a whole universe of meaning that has little to do with reference.

Most economists and semanticists would no doubt object to what the previous paragraph says about their discipline, and the objections would take one of two forms. Either they would defend homo economicus/referentialism, or they would downplay the importance of the conjecture in question—“Homo economicus is just a useful teaching tool for undergrads. No one takes it seriously anymore!”[4] “Semanticists don’t mean reference literally, we use model theory!”—and it’s this second sort of response that I think can explain the expansionary behaviour of these disciplines. Suppose we take these objections to be honest expressions of what people in the field believe—that economics isn’t about homo economicus and formal semantics isn’t about reference. Well then, what are they about? The rise of behavioural economics suggests that economists are still looking for a replacement model of human agency, and model theory is basically just reference delayed.

The theories, then, seem to be about nothing at all—or at least nothing that exists in the real world—and as a result, they can be about anything at all—they are unmoored.

Furthermore, there’s an incentive to expand your domain when possible. A theory of nothing obviously can’t be justified by giving any sort of deep explanation of any one aspect of nature, so it has to be justified by appearing to offer explanations to a breadth of topics. Neoclassical economics can’t seem to predict when a bubble will burst, or what will cause inflation, but it can give what looks like insight into family structures. Formal semantics can’t explain why “That pixel is red and green.” is contradictory, but it provides a formal language to translate pragmatics into.

There’s a link here to my past post about falsification, because just as a theory about nothing can be a theory about anything, a theory about nothing cannot be false. So, watch out—if your empirical domain seems to be expanding, you might not be doing science any more.

References

1 It’s pretty much a tautology that a science’s domain will either grow, stay constant, or shrink over time
2 Now obviously, there’s a big difference between the two fields—neoclassical economics is extremely useful to the rich and powerful since it lets them justify just about any horrendous crimes they would want to commit in the name of expanding their wealth and power, while formal semantics is a subdiscipline of a minor oddball discipline on the boundaries of humanities, social science, and cognitive science. But I’m a linguist, and I think mostly linguists read this.
3 I could point you to my own writing on this, the works of Jerrold Katz, and arguments from Noam Chomsky on referentialism, or I could point out that one of the godfathers of referentialism, Ludwig Wittgenstein, seems to have repudiated it in his later work.
4 Though, as the late David Graeber pointed out, economists never object when homo economicus is discussed in a positive light.

What does falsification look like anyway?

Vulcan vs Neptune

There’s an argument that plays out every so often in linguistics that goes as follows:

Critic: This data falsifies theory T.
Proponent: Not necessarily, if you consider arguments X,Y, and Z.
Critic: Well, then theory T seems to be unfalsifiable!

This is obviously a specious argument on the part of the critic, since unfalsified does not entail unfalsifiable, but I think it stems from a very understandable frustration—theorists often have an uncanny ability to wriggle free of data that appears to falsify their theories, even though falsificationism is assumed by a large majority of linguists. The problem is that the logic of falsificationism, while quite sound, maybe even unimpeachable, turns out to be fiendishly difficult to apply.

At its simplest, the logic of falsificationism says that a theory is scientific insofar as one can construct a basic statement—i.e., a statement of fact—that would contradict the theory. This, of course, is an oversimplification of Karl Popper’s idea of Critical Rationalism in a number of ways. For one, falsifiability is not an absolute notion. Rather, we can compare the relative falsifiability of two theories by looking at what Popper calls their empirical content—the set of basic statements that would contradict them. So if a simple theoretical statement P has a particular empirical content, then the conjunction P & Q will have a greater empirical content, and the disjunction P v Q will have a lesser one. This is a useful heuristic when constructing or criticizing a theory internally, and it seems like a straightforward guide to testing theories empirically. Historically, though, this has not been the case, largely because it is often difficult to recognize when we’ve arrived at and accurately formulated a falsifying fact. Indeed, it is often, maybe always, the case that we don’t recognize a falsifying fact as such until after one theory has been superseded by another.
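Popper’s comparison can be made concrete with a toy propositional model. The sketch below is my own illustration, not Popper’s formalism: it treats a “basic statement” as a complete truth assignment to the atoms P and Q, and compares the sets of assignments that would contradict each theory.

```python
from itertools import product

def falsifiers(formula, atoms):
    """Return the set of truth assignments that contradict `formula`.

    A 'basic statement' is modeled as a complete truth assignment over
    the atoms; an assignment falsifies a formula if the formula comes
    out false under it.
    """
    return {
        assignment
        for assignment in product([True, False], repeat=len(atoms))
        if not formula(dict(zip(atoms, assignment)))
    }

atoms = ["P", "Q"]
p = lambda v: v["P"]                    # the bare claim P
p_and_q = lambda v: v["P"] and v["Q"]   # the conjunction P & Q
p_or_q = lambda v: v["P"] or v["Q"]     # the disjunction P v Q

# The conjunction has strictly more potential falsifiers than P
# (greater empirical content); the disjunction has strictly fewer
# (lesser empirical content).
assert falsifiers(p, atoms) < falsifiers(p_and_q, atoms)
assert falsifiers(p_or_q, atoms) < falsifiers(p, atoms)
```

Of course, real basic statements aren’t truth assignments over two atoms, but the subset relation among falsifier sets is exactly the sense in which P & Q is “more falsifiable” than P v Q.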

Take, for instance, the case of the respective orbits of Mercury and Uranus. By the 19th century, Newtonian mechanics had allowed astronomers to make very precise predictions about the motions of the planets, and based on those predictions, there was a problem: two of the planets were misbehaving. First, it was discovered that Uranus—then the outermost known planet—wasn’t showing up where it should have been. Basically, Newton’s mechanics predicted that on such and so day and time Uranus would be in a particular spot in the sky, but the facts were otherwise. Rather than cry “falsification!”, though, the astronomers of the day hypothesized an object on the other side of Uranus that was affecting its orbit. One such astronomer, Urbain Le Verrier, was even able to work backwards and predict where that object could be found. So in September of 1846, armed with Le Verrier’s calculations, Johann Gottfried Galle was able to observe an eighth planet—Neptune. Thus, an apparent falsification became corroboration.

Urbain Le Verrier (1811-1877)
Johann Galle (1812-1910)

I’ve previously written about this story as a vindication of the theory-first approach to science. What I didn’t write about, and what is almost never discussed in this context, is Le Verrier’s work on the misbehaving orbit of Mercury. Again, armed with Newton’s precise mechanics, Le Verrier calculated the Newtonian prediction for Mercury’s orbit, and again[1] Mercury didn’t behave as expected. Again, rather than throw out Newtonian mechanics, Le Verrier hypothesized the planet Vulcan between Mercury and the sun and set about trying to observe it. While many people claimed to observe Vulcan, none of these observations were reliably replicated. Le Verrier was undeterred, though, perhaps because observing a planet that close to the sun is quite tricky. Of course, it would be easy to paint Le Verrier as an eccentric—indeed, his Vulcan hypothesis is somewhat downplayed in his legacy—but he doesn’t seem to have been treated as one by his contemporaries. The Vulcan hypothesis wasn’t universally believed, but nor does it seem to have been the Flat-Earth theory of its day.

It was only when Einstein used his General Theory of Relativity to accurately calculate Mercury’s orbit that the scientific community seems to have abandoned the search for Vulcan. Mercury’s orbit is now considered a classic successful test of General Relativity, but why don’t we consider it a refutation of Newtonian mechanics? Strict falsificationism would seem to dictate that we should, but then a strict falsificationist would have thrown out Newtonian mechanics as soon as we noticed Uranus misbehaving. So, falsificationism of this sort leads us to something of a paradox: if a single basic statement contradicts a theory, there’s no way of knowing whether some second basic statement, in conjunction with the first, could save the theory.

Still, it’s difficult to toss out falsification entirely, because a theory that doesn’t reflect reality may be interesting but isn’t scientific.[2] Also, any reasonable person who has ever tried to explain any phenomenon probably rejects most of their own ideas rather quickly on empirical grounds. We should instead adopt falsificationism as a relative notion—using it when comparing multiple theories. So, Le Verrier was ultimately wrong, but he acted reasonably: he had a pretty good theory of mechanics, so he worked to reconcile it with some problematic data. Had someone developed General Relativity in Le Verrier’s time, then it would have been unreasonable to insist that a hypothesized planet was a better explanation than an improved theory.

Returning to the hypothetical debate between the Critic and the Proponent, then, I think a reasonable albeit slightly rude response for the proponent would be “Well, do you have a better theory?”

References

1 Technically though, Le Verrier’s work on Mercury predated his work on Uranus
2 Though sometimes, theories which seem to be empirically idle end up being scientifically important (cf. non-Euclidean geometry)

Chris Collins interviews Noam Chomsky about formal semantics

Over on his blog, Chris Collins has posted the text of a conversation he had over email with Noam Chomsky on the topic of formal semantics. While Chomsky has been very open about his views on semantics for a long time, this interview is worth reading for working linguists because Collins frames the conversation around work by linguists—Heim & Kratzer, and Larson & Segal—rather than philosophers—Quine, Austin, Wittgenstein, Frege, et al.

You should read it for yourself, but I’d like to highlight one passage that jumped out at me. Of the current state of the field, Chomsky says:

Work in formal semantics has been some of the most exciting parts of the field in recent years, but it hasn’t been treated with the kind of critical analysis that other parts of syntax (including generative phonology) have been within generative grammar since its origins. Questions about explanatory power, simplicity, learnability, generality, evolvability, and so. More as a descriptive technology. That raises questions.

p 5. (emphasis mine)

It’s true that formal semantics today is a vibrant field. There are always new analyses, the methods of formal semantics are being applied to new sets of data, and, indeed, it’s virtually impossible to even write a paper on syntax without a bit of formal semantics. Yet it is also true that almost no one has been thinking about the theory underpinning the analytical technology. As a result, I don’t think many working semanticists are even aware that there is such a theory, or if they are aware, they tend to wave their hands and say “that’s philosophy.” Formal semanticists, it seems, have effectively gaslit themselves.

Chomsky’s framing here is interesting, too. He could be understood as suggesting that formal semantics could engage in theoretical inquiry while maintaining its vibrancy. It’s not clear that this is the case though. Currently, formal semantics bears a striking similarity to the machine-learning/neural-nets style of AI, in that both are being applied to a very wide array of “problems” but a closer look at the respective technologies very likely would cause us to question whether they should be. Obviously, the stakes are different—no one’s ever been injured in a car crash because they used lambdas to analyze a speech act—but the principle is the same.

But I digress. Collins and Chomsky’s conversation is interesting and very accessible to anyone who is familiar with Heim & Kratzer-style semantics. It’s well worth a read.

Break up Big University; Create Jobs

The argument in this article (tweeted out by Shit Academics Say) is just designed to pit one group of workers (sessional lecturers) against another (tenured faculty). It does this by ignoring the fact that the number of faculty positions, at least in Canada, is kept artificially low.

Notice that there is no mention whatsoever of class sizes. Coming from UofT, I can tell you that class sizes have been out of control for years. My intro bio class was so big that no lecture hall could house it. Lectures were held in the 1730-seat Convocation Hall.

Convocation Hall is a beautiful building, but it is designed for ceremony, not pedagogy. There is no chalkboard or whiteboard, and if there were, the students in the upper balcony wouldn't be able to read it. What's more, the seats have no writing surface for note-taking.

More recently, I taught a "general interest" linguistics course (a "bird course") so big that it also couldn't be housed in a proper lecture hall. Instead, we had what was basically a movie theatre. The lights were perpetually dimmed, and again, there was no chalkboard.

These sorts of non-classrooms really only allow for one type of teaching style, possibly the worst type: A lecturer droning on about a slide deck.

Beyond just the lectures, it's quite impossible for all 1000+ students of such a class to have direct access to their professors in office hours. There aren't enough hours in the day.

(Of course, most students don't go to office hours. It might make a good action, though, for student unions to organize students to go to office hours en masse. Not to shout slogans at professors, but just to ask for help.)

Clearly, UofT, the largest university in Canada, has reached its capacity of students.

Imagine, though, if we kept tenure and the researcher/teacher model of academia and put hard limits on class sizes. Say, 200 for first-year classes, 100 for second-year, and so on. How would that affect things?

The neoliberal response would probably be "well, you'd have to have fewer students, probably only well-off white students."
But there's another possibility: Expand the faculty size by creating new universities.

This could mean founding a brand new university, or it could mean splitting up oversized universities. UofT, for instance, has three campuses: Downtown, Scarborough, and Mississauga. Why not spin them off from each other?

There are definitely ways to do this that I haven't thought about, and none of them are perfect, and all of them require public funding. But that's true of any societal problem.

But we can't really expect to solve the problem without an adequate diagnosis of the problem's source.
There's no shortage of qualified educators, nor is there a shortage of people who want/need an education. The problem is infrastructure.

So, whenever someone makes an argument pitting workers against workers, it can only really serve to obscure the fact that the problem is elsewhere—with management, with bureaucracy, with politicians.

Originally tweeted by Dan Milway (@thrilway) on June 26, 2021.