If At First You Don’t Succeed… — Counting Paper Rejections

Tags: academia, research


One of the staples of academic life is that your work will be rejected. Thanks to modern peer review, the path to publishing your work in an ‘official’ venue—whatever that might mean for your chosen scientific field—is fraught with obstacles, the primary one being the reviewers. In machine learning, where fast-paced conferences are de rigueur, this is no different. While there is a little bit of data on where rejected submissions to NeurIPS, one of our flagship conferences, eventually end up, few academics appear to be talking or writing openly about the toil and trouble concerning their papers. I want this to change. Here then, in the spirit of full transparency, are most of my conference submissions since about 2018, focusing on computational-biology and machine-learning venues (if lists are not your thing, scroll a little bit to read about my thoughts on these rejections).

This list is quite detailed, but a histogram will help:

A histogram of paper rejections

I am somewhat surprised to see that most papers get in on the first try. This is certainly not what it feels like—I would have expected at least one round of rejections. The other remarkable, but not altogether unexpected, thing about this histogram is that it is long-tailed. After two rejections, papers are typically accepted—cue Kevin Flynn saying ‘And then, one day I got in…’ There are only a few papers that need some more tries. Let me focus on those a little bit.

The paper with the most rejections1 is MAGNet: Motif-Agnostic Generation of Molecules from Scaffolds. I could dive into the harrowing details of each rejection—some of which appeared to be quite unfair—but the fact of the matter is that, apart from some wordsmithing, rewriting, and reordering of content, the basic idea has remained the same since day one. Despite this, the paper has now been accepted as a spotlight presentation at ICLR, an accolade bestowed only upon the top 5% of all submissions. Another paper, Evaluation Metrics for Graph Generative Models: Problems, Pitfalls, and Practical Solutions, had a similar journey, albeit with fewer rejections. Beauty, it seems, is very much in the eye of the beholder, or rather the reviewer.

That is of course not to say that papers did not change after a rejection; quite the contrary. In one case, what would later become Metric Space Magnitude for Evaluating the Diversity of Latent Representations, we even pivoted away from an earlier line of inquiry because we found it hard to build a case for our method on the original set of experiments. In this sense, the discussions with reviewers definitely helped make our idea(s) shine. While this is not exactly a case of ‘iron sharpens iron,’ I believe that the steely comments of reviewers often helped us rebuild the more brittle parts of our arguments and papers, with the core idea remaining the same.2

However, the main moral of this data is that persistence is key. If you give up on your work, it will never be published—if you, as its creator, do not have faith in it, how can anyone else? Do not mistake persistence for complacency, though. It always paid off to revise the work in light of the comments by reviewers—in particular the ones that stung at first.3 For all the lamentations I could make about peer review being broken or, if not dead, then at least asleep at the wheel, there is value in having some noisy gradient information about your work come back to you, because in the end, you want someone to read it and, ideally, build on it. To paraphrase Howard H. Aiken: ‘If your ideas are any good, you’ll have to ram them down people’s throats.’ Sometimes, it just takes a few attempts.4

Here’s to more rejections—until next time!


  1. So far, at least. ↩︎

  2. I believe that every paper needs a core idea, a strong foundation on which to build the rest. If that is in place, my overconfident inner narcissist tells me that things are going to be all right. ↩︎

  3. I firmly believe that we should teach reviewers to give actionable feedback while making sure that the tonality is right. ↩︎

  4. The same goes for grants, but the feedback culture is quite different there, so this needs to be the subject of a future article. ↩︎