Piantadosi and MLMs again (I)

Last spring, Steven Piantadosi, professor of psychology and neuroscience, posted a paean to Modern Language Models (MLMs) entitled Modern language models refute Chomsky’s approach to language on LingBuzz. This triggered a wave of responses from linguists, including one from me, pointing out the many ways in which he was wrong. Recently, Prof. Piantadosi attached a postscript to his paper in which he responds to his critics. The responses are so shockingly bad that I felt I had to respond—at least to those that stem from my critiques—which I will do, spaced out across a few short posts.

In my critique, I brought up the problem of impossible languages, as did Moro et al. in their response. In addressing this critique, Prof. Piantadosi surprisingly begins with a brief diatribe against “poverty of the stimulus.” I say surprisingly not because it’s surprising for an empiricist to mockingly invoke “poverty of stimulus” (much in the same way that creationists mockingly ask why there are still apes if we evolved from them), but because poverty of stimulus is completely irrelevant to the problem of impossible languages, and neither I nor Moro et al. even use the phrase “poverty of stimulus.”[1]

This irrelevancy expressed, Prof. Piantadosi moves on to a more on-point discussion. He argues that it would be wrong-headed for the constraints that would make some languages impossible to be encoded in our model from the start. Rather, if we start with an unconstrained model, we can discover the constraints naturally:

If you try to take constraints into account too early, you might have a harder time discovering the key pieces and dynamics, and could create a worse overall solution. For language specifically, what needs to be built in innately to explain the typology will interact in rich and complex ways with what can be learned, and what other pressures (e.g. communicative, social) shape the form of language. If we see a pattern and assume it is innate from the start, we may never discover these other forces because we will, mistakenly, think innateness explained everything

p36 (v6)

This makes a certain intuitive sense. The problem is that it’s refuted both by the history of generative syntax and the history of science more broadly.

In early theories, a constraint like “No mirroring transformations!” would have to be stated explicitly. Current theories, though, are much simpler, with most constraints being derivable from the theory rather than tacked onto it.
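To make the contrast concrete, here is a toy sketch in Python (my own illustration, not drawn from any of the papers discussed) of what a linearly defined “mirroring” rule looks like next to a structure-dependent rule of the sort natural languages actually use. The function names and the hand-supplied auxiliary position are assumptions made purely for illustration.

```python
# A toy illustration (my own, not taken from the cited studies) of an
# "impossible" rule versus a possible one.

def mirror_question(sentence):
    """Form a 'question' by mirroring linear word order: an operation
    defined purely over the string, with no reference to structure."""
    words = sentence.rstrip(".").split()
    return " ".join(reversed(words)) + "?"

def front_matrix_auxiliary(words, aux_index):
    """A structure-dependent counterpart: front the auxiliary of the main
    clause. The index stands in for a real structural analysis here."""
    fronted = [words[aux_index]] + words[:aux_index] + words[aux_index + 1:]
    return " ".join(fronted) + "?"

declarative = "the child who is sleeping is happy."

print(mirror_question(declarative))
# -> "happy is sleeping is who child the?"   (the mirroring, string-based rule)

words = declarative.rstrip(".").split()
print(front_matrix_auxiliary(words, aux_index=5))
# -> "is the child who is sleeping happy?"   (the attested, structure-sensitive rule)
```

The point of the contrast is that the first rule is stated over nothing but the string, which is exactly the kind of operation that, on the generative view, no human language employs.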

A digression on scholarly responsibility: Your average engineer working on MLMs could be forgiven for not being up on the latest theories in generative syntax, but Piantadosi is an Associate Professor who has chosen to write a critique of generative syntax, so he really ought to know these things. In fact, he could only fail to know these things through laziness or a conscious choice not to know.

Furthermore, the natural sciences have progressed thus far in precisely the opposite direction to the one Piantadosi prescribes—they have started with highly constrained theories, and progress has generally occurred when some constraint was questioned. Copernicus questioned the constraint that Earth stood still, Newton questioned the constraint that all action was local, and Friedrich Wöhler questioned the constraint that organic and inorganic substances were inherently distinct.

None of this, of course, means that we couldn’t do science in the way that Piantadosi suggests—I think Feyerabend was correct that there is no singular Scientific Method—but the proof of the pudding is in the eating. Piantadosi is effectively making a promise that, if we let MLM research run its course, we will find new insights[2] that we could not find had we stuck with the old direction of scientific progress. He may be right—just as AGI may actually be 5 years away this time—but I’ll believe it when I see it.


After expressing his methodological objections to considering impossible languages, Piantadosi expresses skepticism as to the existence of impossible languages, stating “More troubling, the idea of ‘impossible languages’ has never actually been empirically justified” (p37, v6). This is a truly astounding assertion on his part, considering that both Moro et al. and I explicitly cite experimental studies that arguably provide exactly the empirical justification that Piantadosi claims does not exist. The cited studies present participants with two types of made-up languages—one which follows and one which violates the rules of language as theorized by generative syntax—and observe their responses as they try to learn the rules of the particular languages. The study I cite (Smith and Tsimpli 1995) compares the behavioural responses of a linguistic savant to those of neurotypical participants, while the studies cited by Moro et al. (Tettamanti et al., 2002; Musso et al., 2003) use neuro-imaging techniques. Instead, Prof. Piantadosi refers to every empiricist’s favourite straw-man argument—the alleged lack of embedding structures in Pirahã.

This bears repeating. Both Moro et al. and I expressly point to experimental evidence of impossible languages, and Piantadosi’s response is that no one has ever provided evidence of impossible languages.

So, either Prof. Piantadosi commented on my and Moro et al.’s critiques without reading them, or he read them and deliberately misrepresented them. It is difficult to see how this could be the result of laziness or even willful ignorance rather than dishonesty.

I’ll leave off here, and return to some of Prof. Piantadosi’s responses to my critiques at a later time.

Notes
1 For my part, I didn’t mention it because empiricists are generally quite assiduous in their refusal to understand poverty of stimulus arguments.
2 He seems to contradict himself later on when he asserts that the “science” of MLMs may never be intelligible to humans. More on this in a later post.

Some good news on the publication front

Today I woke up to an email from the editor of Biolinguistics informing me that my manuscript “A parallel derivation theory of adjuncts” had been accepted for publication. I was quite relieved, especially since I had been expecting some news about my submission for a couple of days—the ability to monitor the progress of submissions on a journal’s website is a decidedly mixed blessing—and there was a definite possibility in my mind that it could have been rejected.

It was also a relief because it’s been a long road with this paper. I first wrote about the kernel of its central idea—that syntactic adjuncts were entirely separate objects from their “hosts”—in my thesis, and I presented it a few times within the University of Toronto Linguistics Department. I first realized that it had some legs when it was accepted as a talk at the 2020 LSA Meeting in New Orleans, and I started working on it in earnest in the spring and summer of 2020, submitting the first manuscript version to a different journal in August 2020.

If you follow me on Twitter, you saw my reactions to the peer-review process in real time, but it’s worth summarizing. Versions of this manuscript underwent peer review at multiple journals, and in every case there were one or two constructive reviews—some positive, and some negative but nevertheless pointing out serious yet fixable issues—but invariably there was also one reviewer who was clearly hostile to the manuscript, often resorting to sarcasm and vague comments.

I’m sure the manuscript improved over the various submissions, but I believe that the main reason that the paper will finally be published is because the editor of Biolinguistics, Kleanthes Grohmann, recognized and agreed with me that one of the reviewers was being unreasonable, so I definitely owe him my gratitude.

There are more edits to go, but you can look forward to seeing my paper in Biolinguistics in the near future.

Why are some ideas so sticky? A hypothesis

Anyone who has tried to articulate a new idea or criticize old ones may have noticed that some ideas are washed away relatively easily, while others seem to actively resist even the strongest challenges—some ideas are stickier than others. In some cases, there’s an obvious reason for this stickiness—in some cases there’s even a good reason for it. Some ideas are sticky because they’ve never really been interrogated. Some are sticky because there are powerful parts of society that depend on them. Some are sticky because they’re true, or close to true. But I’ve started to think there’s another reason an idea can be sticky—the amount of mental effort people put into understanding the idea as students.

Take, for instance, X-bar theory. I don’t think there’s some powerful cabal propping it up, it’s not old enough to just be taken for granted, and Chomsky’s Problems of Projection papers showed that it was not really tenable. Yet X-bar persists, and not just in how syntacticians draw trees or how they informally talk about them. I remember that commentary on my definition of minimal search here involved puzzlement about why I didn’t simply formalize the idea that specifiers were invisible to search, followed by more puzzlement when I explained that the notion of specifier was unformulable.

In my experience, the stickiness of X-bar theory—and syntactic projection/labels more broadly—doesn’t manifest itself in attempts to rebut arguments against it, but in attempts to save it—to reconstitute it in a theory that doesn’t include it.[1] This is very strange behaviour—X-bar is a theoretical construct, and it’s valid only insofar as it is coherent and empirically useful. Why are syntacticians fighting for it? I wondered about this for a while, and then I remembered my experience learning X-bar and teaching it—it’s a real challenge. It’s probably the first challenging theoretical construct that syntax students are exposed to. It tends to be presented as a fait accompli, so students just have to learn how it functions. As a result, those students who do manage to figure it out are proud of it and defend it like someone protecting their cherished possessions.[2]

Of course, it’s a bit dangerous to speculate about the psychological motivations of others, but I’m certain I’ve had this reaction in the past when someone’s challenged an idea that I at one point struggled to learn. And I’ve heard students complain about the fact that every successive level of learning syntax starts with “everything you learned last year is wrong”—or at least that’s the sense they get. So, I have a feeling there’s at least a kernel of truth to my hypothesis. Now, how do I go about testing it?


Addendum

As I was writing this, I remembered something I frequently think when I’m preparing tests and exams that I’ve thus far only formulated as a somewhat snarky question:

How much of our current linguistic theory depends on how well it lends itself to constructing problem sets and exam questions?

Notes
1 My reading of Zeijstra’s chapter in this volume is as one such attempt.
2 I think I may be describing “effort justification,” but I’m basing this just on the Wikipedia article.

Bad omens for generative syntax

In the last few weeks there have been a couple of posts in the generative linguistics blogosphere that don’t bode well for the field.

The first is the sad announcement from Omer Preminger that he is leaving academia in order to live in the same town as his wife. This news is also rather shocking, since Preminger is a fairly prominent syntactician—someone whose work, though I didn’t always agree with it, had to be addressed seriously—and if a scholar of his prominence and ability can’t negotiate something as reasonable as a spousal hire, what hope does anyone else have of having both a life and an academic career? I’m just a sessional lecturer, so treating me like a robot is still the norm, but to hear that faculty members are also expected to be robots is disconcerting, to be sure.

Omer promises more reflections on his time in academia, which I will read with some interest when they come out, but I am sorry to see him leaving academia.

The second concerning report comes from Chris Collins. Collins, it seems, applied to some of the same tenured/tenure-track jobs as me this past year, and got the same boilerplate rejection emails as me. That a tenured professor is in the same job market as me is not especially surprising. What should be surprising is that no university wanted to hire him, since he not only has a fairly strong empirical program, but has also made important contributions to syntactic theory: while the idea of label-free syntax is commonly attributed to Chomsky (2013; 2015), Chomsky cites Collins for the idea; slightly more recently, Collins’ work with Ed Stabler formalizing minimalist syntax in a few ways predicted Chomsky’s most recent move to workspace-based MERGE and, on a personal note, has been an invaluable resource for my work.

Collins’ explanation of his unsuccessful applications is twofold and both parts suggest bad trends in generative syntax.

The first explanation is one that I gather is common across all academic fields[1]—department budgets are too tight to hire a senior scholar like Collins when junior candidates are available and cheaper. Collins is probably right on this, but he is unfortunately commenting on the last war. While it’s probably true that junior hires are preferred over senior hires for budgetary reasons, junior tenure-track faculty are not the floor. Why hire an expensive faculty member who you have to provide with an office, a small research budget, and a pension, when you can hire a few precarious adjuncts for cheaper?

As an aside of sorts, I remember having arguments in grad school with my fellow grad students about whether our department should hire tenured faculty away from other departments. The standing wisdom was that that was the trajectory—smaller departments hired junior faculty, and once they’d proved themselves they’d move on to bigger and better places, opening up a spot at their old place. There was a feeling that, sure, there was no growth in faculty positions, but they were at least going to replace faculty that left or retired. I was skeptical of that line. University administrators had adopted the neoliberal model almost entirely—The Market reigned supreme—and The Market was clear: Linguistics, along with the broader humanities, was useless, so why not take every opportunity to squeeze those useless departments, say, by delaying replacement hires.

All of this is to say that I think Collins has identified a trend, but not a new one. The lower echelons of academia have been enduring this trend for some time now. Perhaps now that it’s reaching the upper echelons, we can see about stopping or reversing it … perhaps.

Collins’ second explanation is that, while he has made valuable contributions in recent years, the field doesn’t appreciate those contributions, and I think he might add the qualifier “yet” to that assessment. Again, I think he’s correct, and he’s identified a trend that I first saw identified by Norbert Hornstein, namely that much of what we call “theoretical syntax” is actually empirical/analytical work. This trend, I think, has morphed first to the point where so-called theoretical syntacticians were puzzled by actual theoretical work, then to the point where they are hostile to it. I suspect Collins has been a victim—though he in no way frames himself as a victim—of this hostility.

So, while there is a decided difference in degree between these two career setbacks, I think they are both part of the same trend, a trend which has been affecting more marginalized and vulnerable parts of academia for some time. The fact that this trend is now directly affecting established generative syntacticians should make the field as a whole take notice. At least I hope it does.

Notes
1 At least those fields that modern capitalism deems useless.

Break up Big University; Create Jobs

The argument in this article (tweeted out by Shit Academics Say) is just designed to pit one group of workers (sessional lecturers) against another (tenured faculty). This is because it ignores the fact that the number of faculty positions, at least in Canada, is kept artificially low.

Notice that there is no mention whatsoever of class sizes. Coming from UofT, I can tell you, class sizes have been out of control for years. My intro bio class was so big that no lecture hall could house it. Lectures were held in the 1730-seat Convocation Hall.

Convocation Hall is a beautiful building, but it is designed for ceremony, not pedagogy. There is no chalkboard or whiteboard, and if there were, the students in the upper balcony wouldn't be able to read them. What's more, the seats have no writing surface for note-taking.

More recently, I taught a "general interest" linguistics course (a "bird course") so big that it also couldn't be housed in a proper lecture hall. Instead we had what was basically a movie theatre. The lights were perpetually dimmed, and again, no chalkboard.

These sorts of non-classrooms really only allow for one type of teaching style, possibly the worst type: A lecturer droning on about a slide deck.

Beyond just the lectures, it's quite impossible for all 1000+ students of such a class to have direct access to their professors in office hours. There aren't enough hours in the day.

(Of course, most students don't go to office hours. It might make a good action, though, for student unions to organize students to go to office hours en masse. Not to shout slogans at professors, but just to ask for help.)

Clearly, UofT, the largest university in Canada, has reached its capacity of students.

Imagine, though, if we kept tenure and the researcher/teacher model of academia and put hard limits on class sizes. Say, 200 for 1st yr classes, 100 for 2nd yr and so on. How would that affect things?

The neoliberal response would probably be "well, you'd have to have fewer students, probably only well-off white students."
But there's another possibility: Expand the faculty size by creating new universities.

This could mean founding a brand new university, or it could mean splitting up oversized universities. UofT, for instance, has three campuses: Downtown, Scarborough, and Mississauga. Why not spin them off from each other?

There are definitely ways to do this that I haven't thought about, and none of them are perfect, and all of them require public funding. But that's true of any societal problem.

But we can't really expect to solve the problem without an adequate diagnosis of the problem's source.
There's no shortage of qualified educators, nor is there a shortage of people who want/need an education. The problem is infrastructure.

So, whenever someone makes an argument pitting workers against workers, it can only really serve to obscure the fact that the problem is elsewhere—with management, with bureaucracy, with politicians.

Originally tweeted by Dan Milway (@thrilway) on June 26, 2021.

Play, games, science and bureaucracy

In the titular essay of his 2015 book The Utopia of Rules, David Graeber argues for a distinction between play and games. Play, according to Graeber, is free, creative, and open-ended, while games are rigid, repetitive, and closed-off. Play underlies art, science, conversation, and community, while games are the preferred method of bureaucracy. This idea really resonated with me, partially because I’m someone who doesn’t really like games, but also because I think it’s perfectly consonant with something I’ve written about previously: the distinction between theoretical and descriptive science. In this post, I’ll explore that intuition and argue that theoretical scientific research tends to center play, while descriptive research tends to center games.

The key distinction between games and play, according to Graeber, is rules. While both are leisure activities done for sheer enjoyment, games are defined by their rules. These rules can be rather simple (e.g., checkers), fiendishly complex (e.g., Settlers of Catan), or something in between, but whatever they are, the rules make the game. What’s more, Graeber argues, it’s the rules that make games an enjoyable respite from the ambiguities of real life. At any given point in a game, there are a finite number of possible moves and a fixed objective. If only we had that same certainty when navigating interactions with neighbours, co-workers, and romantic interests!

If games are defined by their rules, then play is defined by its lack of rules. The best example—one used by Graeber—is that of children playing. There are no rules to how children play. In fact, as Graeber observes, a good portion of play between children involves negotiating the rules. What’s more, there’s no winning at play, only enjoyment. Play is open-ended—set a group of children to play and there’s no knowing what they’ll come up with.

Yet, play is not random. It follows principles—such as the innate social instincts of children—which are a different sort of thing from the rules that govern games. Rules must be explicit, determinate, and preferably compiled in some centralized place so that, in a well-designed game, disputes can always be settled by consulting some authority, usually a rule-book. Principles are often implicit—no one teaches kids how to play—can be quite vague—a main principle of improv is “Listen”—and are arguably somehow internal—if there are principles to playing a musical instrument, they come from the laws of physics, the form and material of the instrument, and our own innate sense of melody, harmony, and rhythm.

As this description might suggest, rules and principles, games and play, are often in conflict with each other. Taking a playful activity and turning it into a game can eliminate some of the enjoyment. Take, for instance, flirtation—a playful activity, for which an anthropologist might be able to discover some principles. People who treat flirtation as a game understandably tend to be judged as creepy. Understandably, because gaming assumes determinate rules—if I do X in situation Y, then Z will happen—and no-one likes to be treated like a robot. Or, consider figures like Chelsea Manning or Edward Snowden. Each was faced with a conflict between the external rules of an institution and their internal principles, and chose the latter.

This conflict, however, need not be an overall negative. Any art form at any given time follows a number of rules and conventions that, at their best, define the space in which an artist can play. Eventually, though, the rules and conventions of a given art form or genre become too fixed and end up stifling the playfulness of the artists. I remember my cousin, who was a cinema studies major in undergrad, talking about watching Citizen Kane for a class. The students were confused—this is widely lauded as one of the greatest films ever made, but they couldn’t see what was so special about it. The instructor explained that Citizen Kane was groundbreaking when it came out; it broke all the rules, but it ended up replacing them with new ones. Now those new rules are so commonplace that they are completely unremarkable. While I don’t think we could develop an entire theory of aesthetics based solely on the balance between games and play, the opposition seems to be active in how we judge art.

But what does this have to do with science? Well, thus far I’ve suggested that games are defined by external rules, while play is guided by internal principles. This contrast lines up quite nicely with Husserl’s definitions of descriptive and theoretical sciences, respectively. Descriptive sciences are sets of truths grouped together by some externally imposed categorization, while theoretical sciences are sets of truths which have an internal cohesion. If I’m on the right track, then descriptive sciences should share some characteristics with games, while theoretical sciences should share some with play.

Much as games impose rules on their participants, descriptive sciences impose methods on their researchers. Oftentimes they are quite explicit about this. Noam Chomsky, for instance, often says that linguistics education in the mid-20th century was almost exclusively devoted to learning and practicing elicitation procedures (read: methods). The cognitive revolution that Chomsky was at the center of changed this, allowing theory to take center-stage, but we are currently in the midst of a shift back towards method. Graduate students are now expected or even required to take courses in “experimental” or quantitative methods. Job ads for tenure-track positions are rarely simply for a phonologist or a semanticist; rather, they invariably ask for experience with quantitative analysis, experimental methods, and the like.

The problem with this is that methods in science, like rules in games, serve to fence in possibilities. When you boil it down to its essence, a well-run experiment or corpus study is nothing but an attempt to frame and answer a yes-or-no question. What’s more, each method is quite restricted as to what sort of questions it can even answer. Even the presentation of method-driven research tends to be rather rigidly formatted—experimental reports follow the IMRaD format, as do many observational studies, and grammars, the output of field methods, tend to start with the phonetics and end with the syntax/semantics. So when someone says they’re going to perform an eye-tracking study, or some linguistic fieldwork, you can be fairly certain as to what their results will look like, just as you can be certain of what a game of chess will look like.

Contrast this with theoretical work, which tends to start with sometimes horribly broad questions and often ends up somewhere no one would have expected. So, asking what language is yielded results in computer science, inquiring about the motion of the planets led to a new understanding of tides, and asking about the nature of debt revealed profound truths about human society. No game could have these kinds of results—if you sat down to play Pandemic and ended up robbing a bank, it probably means, at the very least, that you read the rules wrong. But theory is not like a game; it’s inherently playful.

Now, anyone who has read any scientific theory might object to this, as the writing produced by theorists tends to be rather abstract and inaccessible, but writing theory is like retelling a fun conversation—the fun is found in the moment and can never be fully recreated. The playful nature of theory, I think, can be seen in two of the main criticisms leveled at theoretical thinkers by non-theorists: that theoreticians can’t make up their minds and that they just make it up as they go along. These criticisms, however, tend to crop up whenever there is serious theoretical progress being made. In fact, many advances in scientific theory are met with outright hostility by the scientific community (see atomic theory, relativity, the theory of grammar, etc.), likely, I think, because a new theory tends to invalidate a good portion of what the contemporary community spent years, decades, or centuries getting accustomed to, or, worse yet, because a theoretical advance might appear to render certain empirical results irrelevant or even meaningless.

Compare this to children playing. If children make up some rules while playing, only a fool would take those to be set in stone. Almost certainly, the children would come up against those rules and decide to toss them by the wayside.

As I mentioned, Graeber discusses games and play as a way of analyzing bureaucracy and our relationship to it. Bureaucracy—be it in government, corporations, or academia—is about creating games that aren’t fun. Bureaucracies are also impersonal power structures, what Hannah Arendt calls “rule by no-one”. And just like games, bureaucracies are designed to hem in playfulness, because allowing people to be playful might lead them to realize that a better life is possible without those bureaucracies.

Within science, too, we can see bureaucracies being aligned with strictly methodical empirical work and somewhat hostile to theoretical work. We can see this in how the respective researchers organize themselves. Empirical work is done in a lab, which refers not just to a physical space but to a hierarchical organization with a PI (principal investigator) supervising and directing post-docs and grad students, who often in turn supervise and direct undergraduate research assistants—a chief executive, middle management, and workers. Theoretical work, on the other hand, is done in a wide array of spontaneously organized affinity groups. So, for instance, neither the Vienna Circle, in philosophy, nor the Bourbaki group, in mathematics, had any particular hierarchical structure, and both were quite broad in their interests.

The distinction can even be seen in how theoretical and descriptive sciences interact with time and space. Experimental work must be done in specially designed rooms, sometimes made just for that one experiment, and observational work must be done in the natural habitat of the phenomena to be observed—just as a chess game must be limited to an 8×8 grid. Theoretical work can be done almost anywhere: in a cafe, a bar, on a train, in a dark basement, or in a spacious detached house. The less specialized the better. In fact, the only limiting factor is the theorist themself. As for time, nowhere is this clearer than in the timelines given by PhD candidates in their thesis proposals. While not all games are on a clock, all games must account for all of their time—each moment of a game has a purpose. This is what a timeline for a descriptive project looks like: “Next month I’ll travel to place X where I’ll conduct Y hours of interviews. The following month I will organize and code the data…” and so on. It’s impossible to provide such detail in the plan for a theoretical work, for several related reasons: the time spent working tends to be unstructured; you never know when inspiration or some kind of moment of clarity will strike; you can’t possibly know what the next step is until you complete the current step; and so on. Certainly, the playful work of theory can sometimes benefit from some structure, but descriptive work, like a game, absolutely depends on structured time and space.

This alignment can also be seen in how theory and method interact with the superstructures of scientific research, that is, the funding apparatuses—granting agencies and corporations. Both sorts of structures are bureaucratic and tend to be structurally opposed to theoretical (read: playful) work. In both cases, funders must evaluate a bunch of proposals and choose to fund those that are most likely to yield a significant result. Suppose you’re a grant evaluator and you have two proposals in front of you: Proposal A is to do linguistic fieldwork on some understudied and endangered language, focusing on some possibly interesting aspect of that language, and Proposal B is to attempt to reconcile two seemingly contradictory theoretical frameworks. Assuming each researcher is eminently qualified to carry out their respective plans, which would you fund? Proposal A is all but guaranteed to have some results—they may be underwhelming, though they could be breakthroughs (this is very unlikely)—a guarantee that’s implicit in the method: it’s always worked before. If Proposal B is successful, it is all but guaranteed to be a major breakthrough; however, there is absolutely no guarantee that it will be successful—if the researcher cannot reconcile the two frameworks, then we cannot draw any particular conclusion from that failure. So which one do you choose? The guarantee, or the conditional guarantee? The conditional guarantee is a gamble, and bureaucrats aren’t supposed to gamble, so we go with the guarantee.

So, bureaucratic funding structures are more inclined to fund methods-based research. That’s fine as far as it goes—theoretical research is dirt cheap, requiring only a nourished mind and some writing materials—but grants aren’t just about the money. Today, grants are used as a metric of research capability: if you can get a lot of grants, then you must be a good researcher. Set aside the fact that virtually any academic will tell you that grant-writing is a particular skill that isn’t directly related to research ability, or that many researchers delegate grant-writing to their post-docs; the logic here is particularly twisted. Granting agencies use past grants as an indication of a good researcher, and so do hiring committees. This makes sense—previous success in a process is a good indicator of future success—provided everything stays more or less the same. Thus the grant system and other bureaucratic systems are likely to defend the status quo by funding descriptive rather than theoretical work.

If my analysis is correct, then the sciences are being held back by the bureaucracies that are supposed to enable them, such as university administration and funding agencies. They’re also held back by their own mythology—the “scientific method”—which promises breakthroughs if only they keep playing the game. This should not be too surprising to anyone who considers how bureaucracies hold them back in their day-to-day lives. What’s frustrating about this, though, is that academia, more than any other sector of modern society, is supposed to be self-organized. University administrators (Deans, Presidents, Provosts, etc.) are supposed to be drawn from the faculty of that university, and funding organizations are supposed to be run by researchers. So, unlike the bureaucracies that demean the poor and outsource jobs, the bureaucracies that stifle academics are self-imposed. The positive side of this is that, if academics wanted to, they could dismantle many of their bureaucracies tomorrow.