Stuck in the Past? Scaling up Academia

Tags: academia, musings


A recurring impression I have of academia is that certain processes are stuck in the past, never getting re-examined. For all the progressive views academia holds, we academics tend to be unwilling to change or to adopt new processes, at least for a while. Several parts of academia make this most obvious to me:

  1. Our peer review system.
  2. Our publication system.
  3. Our conferences.

All of these aspects of academia have one thing in common: they presumably worked well in the past, when conditions were different, but they stopped working reliably under present conditions because we have passed their scaling limits.

Some Examples

For instance, peer review may actually work when (i) your community is not too large, (ii) you have a set of trustworthy referees, (iii) sufficient time is allotted to writing reviews, and (iv) you make the process transparent by having referees sign their reports. Peer review largely stops working reliably when your community comprises tens of thousands of scientists pushing out a deluge of papers daily. New ideas are required because we passed this scaling limit a while ago.

Likewise, the publication system must have worked better at some point. Now, with thousands of journals, and countless hours spent on reformatting and whatnot just to receive a desk reject, I have my doubts about its utility. I have written a lot about how academic publishers are merely meddling with science instead of contributing to it, so I am not going to repeat that here. But again, the publication system worked at least marginally better when there were only a handful of journals for your specific discipline, with hand-picked editors carefully screening submissions, I imagine.1

Last, but certainly not least, conferences are a prime example of such scaling limits. In machine learning, we are cramming thousands of people into week-long conferences every year. While it is nice to see your colleagues, we are living in an age where intercontinental travel should not be undertaken frivolously; plus, this system is decidedly inequitable in the sense that, every year, there are visa issues for those of us with a less prestigious passport.2 Again, I imagine that this situation used to be different in the past, when your whole research field could probably fit into a random cafeteria.

Scale, scale, scale. It appears to be obvious that scale is an issue here. Just like your code might not scale to larger data sets, our present system appears not to scale well to its current size. Instead of refactoring, we are doing…not a lot, it seems.

Some Solutions?

Several solutions or proto-solutions have been proposed. One is to turn science into a ‘marketplace for ideas,’ thus getting rid of binary accept/reject decisions for publications and instead relying on a continuous discourse between reviewers, authors, and the rest of the world. If successful,3 this would change the business model of publishers and force us to rethink our predominant ways of measuring the impact of science. In a similar vein, some conferences are successfully experimenting with hybrid options, relying on local meet-ups to rally the community. LoG, the ‘Learning on Graphs Conference,’ which I had the honour of serving as a programme chair for, was an example of this: by making everything available online, we lowered the participation barrier.

That’s just the beginning, and merely some small steps in some directions. We are not even sure whether this type of experimentation is the right thing to do. One thing is clear, though: the science of the 21st century and beyond cannot just rely on tools and thought processes from the (early) 20th century.4

Here’s to change—for the better!


  1. Of course, I am viewing this through rose-coloured glasses, but the main point is not to glorify the past here but to boldly move into the future with something new. ↩︎

  2. Cue Fortunate Son here. ↩︎

  3. Personally, I am not sure whether a full ‘free-for-all’ is going to be more conducive for scientific dissemination, since it runs the risk of creating some kind of Matthew Effect. But this is maybe better saved for another article. ↩︎

  4. Again, I am happy to invoke Chesterton’s Fence here. We do not have to change everything just for the sake of changing it, but we should think more about the new tools we have at our disposal and how to fully leverage their potential. ↩︎