If At First You Don’t Succeed… — Counting Paper Rejections
One of the staples of academic life is that your work will be rejected. Thanks to modern peer review, the path to publishing your work in an ‘official’ venue—whatever that might mean for your chosen scientific field—is fraught with obstacles, the primary one being the reviewers. In machine learning, where fast-paced conferences are de rigueur, this is no different. While there is a little bit of data on where rejected submissions to NeurIPS, one of our flagship conferences, eventually end up, few academics appear to be talking or writing openly about the toil and trouble concerning their papers. I want this to change. Here then, in the spirit of full transparency, are most of my conference submissions since about 2018, focusing on computational-biology and machine-learning venues (if lists are not your thing, scroll a little bit to read about my thoughts on these rejections).
- Association Mapping in Biomedical Time Series via Statistically Significant Shapelet Mining: accepted at ISMB 2018
- Neural Persistence: A Complexity Measure for Deep Neural Networks Using Algebraic Topology: rejected from NeurIPS 2018, accepted at ICLR 2019
- A Persistent Weisfeiler-Lehman Procedure for Graph Classification: accepted at ICML 2019
- Early Recognition of Sepsis with Gaussian Process Temporal Convolutional Networks and Dynamic Time Warping: accepted at MLHC 2019
- A Wasserstein Subsequence Kernel for Time Series: rejected from KDD 2019, accepted at ICDM 2019
- Wasserstein Weisfeiler-Lehman Graph Kernels: accepted at NeurIPS 2019
- Topological and Kernel-Based Microbial Phenotype Prediction From MALDI-TOF Mass Spectra: accepted at ISMB 2020
- Graph Filtration Learning: rejected from NeurIPS 2019, accepted at ICML 2020
- Set Functions for Time Series: rejected from ICLR 2020, accepted at ICML 2020
- Topological Autoencoders: rejected from NeurIPS 2019, rejected from ICLR 2020, accepted at ICML 2020
- Uncovering the Topology of Time-Varying fMRI Data Using Cubical Persistence: accepted at NeurIPS 2020
- Filtration Curves for Graph Representation: accepted at KDD 2021
- Topological Graph Neural Networks: rejected from ICML 2021, rejected from NeurIPS 2021, accepted at ICLR 2022
- Evaluation Metrics for Graph Generative Models: Problems, Pitfalls, and Practical Solutions: rejected from NeurIPS 2021, accepted at ICLR 2022
- Capturing Shape Information with Multi-scale Topological Loss Terms for 3D Reconstruction: accepted at MICCAI 2022
- Diffusion Curvature for Estimating Local Curvature in High Dimensional Data: accepted at NeurIPS 2022
- On Measuring Excess Capacity in Neural Networks: rejected from ICML 2022, accepted at NeurIPS 2022
- Ollivier-Ricci Curvature for Hypergraphs: A Unified Framework: accepted at ICLR 2023
- Topological Singularity Detection at Multiple Scales: rejected from ICLR 2022, accepted at ICML 2023
- Curvature Filtrations for Graph Generative Model Evaluation: rejected from ICML 2023, accepted at NeurIPS 2023
- Differentiable Euler Characteristic Transforms for Shape Classification: accepted at ICLR 2024
- Simplicial Representation Learning with Neural $k$-Forms: accepted at ICLR 2024
- Mapping the Multiverse of Latent Representations: accepted at ICML 2024
- Position: Topological Deep Learning is the New Frontier for Relational Learning: accepted at ICML 2024
- Metric Space Magnitude for Evaluating the Diversity of Latent Representations: rejected from ICLR 2024, rejected from ICML 2024, accepted at NeurIPS 2024
- On the Expressivity of Persistent Homology in Graph Learning: rejected from NeurIPS 2023, rejected from NeurIPS 2024, accepted at LoG 2024
- CliquePH: Higher-Order Information for Graph Neural Networks through Persistent Homology on Clique Graphs: rejected from KDD 2024, rejected from NeurIPS 2024, accepted at LoG 2024
- Bayesian Computation Meets Topology: rejected from NeurIPS 2023, rejected from ICLR 2024, rejected from ICML 2024, accepted at TMLR
- MAGNet: Motif-Agnostic Generation of Molecules from Scaffolds: rejected from NeurIPS 2023, rejected from ICLR 2024, rejected from ICML 2024, rejected from NeurIPS 2024, accepted at ICLR 2025
- MANTRA: The Manifold Triangulations Assemblage: accepted at ICLR 2025
This list is quite detailed, but a histogram will help:
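For readers following along in text form, the histogram boils down to a simple tally. Here is a minimal Python sketch that recomputes it, with the rejection counts transcribed by hand from the list above (0 means accepted on the first try); the counts are my own transcription, not machine-read:

```python
from collections import Counter

# Rejections per paper, transcribed from the list above, in order.
# 0 = accepted on the first try.
rejections = [
    0, 1, 0, 0, 1, 0, 0, 1, 1, 2, 0, 0, 2, 1, 0,
    0, 1, 0, 1, 1, 0, 0, 0, 0, 2, 2, 2, 3, 4, 0,
]

# Tally how many papers needed a given number of attempts.
histogram = Counter(rejections)
for attempts, count in sorted(histogram.items()):
    print(f"{attempts} rejection(s): {'#' * count} ({count})")
```

Running this confirms the claim below: half of the papers got in on the first try, and the distribution is long-tailed.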
I am somewhat surprised to see that most papers get in on the first try. This is certainly not what it feels like—I would have expected at least one round of rejections. The other remarkable, but not altogether unexpected, thing about this histogram is that it is long-tailed. After two rejections, papers are typically accepted—cue Kevin Flynn saying ‘And then, one day I got in…’ Only a few papers needed more tries than that. Let me focus on those a little bit.
The paper with the most rejections1 is MAGNet: Motif-Agnostic Generation of Molecules from Scaffolds. I could dive into the harrowing details of each rejection—some of which appeared to be quite unfair—but the fact of the matter is that, apart from some wordsmithing, rewriting, and reordering of the content, the basic idea has remained the same since day one. Despite this, the paper has now been accepted as a spotlight presentation at ICLR, an accolade bestowed only upon the top 5% of all submissions. Another paper, Evaluation Metrics for Graph Generative Models: Problems, Pitfalls, and Practical Solutions, had a similar journey, albeit with fewer rejections. Beauty, it seems, is very much in the eye of the ~~beholder~~ reviewer.
That is of course not to say that papers did not change after a rejection: quite the contrary. In one case, the paper that would later become Metric Space Magnitude for Evaluating the Diversity of Latent Representations, we even pivoted from an earlier line of inquiry because we found it hard to build a case for our method based on an earlier set of experiments. In this sense, the discussions with reviewers definitely helped make our idea(s) shine. While this is not exactly a case of ‘iron sharpens iron,’ I believe that the steely comments of reviewers often helped us rebuild the more brittle parts of our arguments and papers, with the core idea remaining the same.2
However, the main moral of this data is that persistence is key. If you give up on your work, it will never be published—if you, as its creator, do not have faith in it, how can anyone else? Do not mistake persistence for complacency, though. It always paid off to revise the work in light of the comments by reviewers—in particular the ones that stung at first.3 For all the lamentations I could make about peer review being broken or, if not dead, then at least asleep at the wheel, there is value in having some noisy gradient information about your work come back to you, because in the end, you want someone to read it and, ideally, build on it. To paraphrase Howard H. Aiken: ‘If your ideas are any good, you’ll have to ram them down people’s throats.’ Sometimes, that just takes a few attempts.4
Here’s to more rejections—until next time!
1. So far, at least. ↩︎
2. I believe that every paper needs a core idea, a strong foundation on which to build the rest. If that is in place, my overly-confident inner narcissist tells me that things are going to be all right. ↩︎
3. I firmly believe that we should teach reviewers to give actionable feedback while making sure that the tone is right. ↩︎
4. The same goes for grants, but the feedback culture is quite different there, so this needs to be the subject of a future article. ↩︎