Open Peer Review Considered Harmful?

Tags: academia, musings


Peer review, one of the staples of modern research, has a somewhat weird reputation: on the one hand, society recognises that there should be a way to assess the correctness of scientific claims.1 On the other hand, researchers are loath to relinquish control to their peers, who typically provide anonymous feedback. This reluctance is usually justified by pointing out that the incentives2 are not in the authors’ favour here: the default position of many journals and many conferences seems to be ‘reject unless proven to be sufficiently good,’ meaning that it is much easier for reviewers to suggest rejection than to engage with the content in a meaningful manner.

How to (Maybe) Avoid Low-Quality Reviews

Many things have been suggested to improve review quality. For instance, when chairing the Learning on Graphs Conference, we provided monetary awards for the best reviewers of the conference.3 Another suggestion involves forcing reviewers to sign their reviews, but this is controversial because it also breaks anonymity—and we are back in ‘bad incentives’ territory. Apart from doing away with peer review in the form of accept/reject decisions entirely, as some new journals are trying, the machine learning community has adopted a commitment to transparency, making more and more parts of the review process open to scrutiny.

This quest for transparency is best embodied in OpenReview, our premier platform for handling the full review process of large-scale machine learning conferences. OpenReview lets you create a venue for your event, assign reviewers, handle submissions, and much more. True to its name, OpenReview permits you to control the visibility of virtually all aspects of your conference. Whether it is the visibility of all submissions, abstracts, reviews, or anything else—OpenReview has you covered.
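To make the ‘open to scrutiny’ part a little more concrete: whatever a venue decides to make public can afterwards be fetched programmatically via the openreview-py client. The following is only a minimal sketch—the venue and invitation strings, as well as the content fields, are placeholders, since every conference configures its own naming scheme.

```python
# Minimal sketch: listing publicly visible reviews with openreview-py.
# The invitation string and content fields below are hypothetical examples;
# consult the venue's own invitations before relying on any of these names.
import openreview

# A guest client (no credentials) suffices for anything made publicly visible.
client = openreview.Client(baseurl="https://api.openreview.net")

# Hypothetical invitation for the official reviews of some venue.
reviews = client.get_notes(invitation="SomeVenue/2024/Conference/-/Official_Review")

for review in reviews:
    # Each note exposes its content as a plain dictionary; the available keys
    # (e.g. 'rating') again depend on how the venue configured its review form.
    print(review.forum, review.content.get("rating"))
```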

Is Open Always Better?

I have used OpenReview for quite a few workshops and conferences now, and I must admit that I never questioned the utility of releasing reviews afterwards. For one thing, I thought that it would provide some additional accountability for reviewers. Moreover, I also believed that high-quality reviews could serve as a beacon, attracting reviewers and authors to subsequent editions of the conference.

One thing I did not consider, though, is whether this has a detrimental effect on junior researchers.4 A colleague recently pointed out that they consider publicly available negative reviews to be very problematic: junior researchers would be reminded of these rejections again and again, and may perceive them as a type of ‘public shaming,’ in particular when they come from a non-traditional background or are part of a minority group.

This was an unexpected opinion for me, which is why I started thinking more deeply about it.

I had always believed that open systems were naturally better in various ways; I had always believed that ‘perfect transparency casts out fear.’ At some point, however, it occurred to me that I had made one crucial mistake: when I started my Ph.D., I worked in a different field than machine learning. Making reviews public was unheard of, and even preprints were considered problematic; in fact, it was suggested to me multiple times that I should share only the absolute minimum about my research to ensure that no one would steal my ideas. As someone who frequently has their best ideas in discussions, this sentiment did not sit well with me. Hence, when I moved my research focus to machine learning in 2018, its transparency was like a breath of fresh air. Open discussions and preprints were the norm, not the exception. Having already gathered some experience with peer review, I enjoyed the openness all the more.

Are We Doing the Right Thing?

I thus recognised that for me, the impact of this kind of ‘forced transparency’ was rather limited, thanks to my relatively privileged position of having already received a fair share of paper decisions (both positive and negative). But my colleague’s comments made me question some fundamental beliefs.

With a type of asymmetric transparency—reviews being anonymous but publicly available, while author identities are typically revealed regardless of the outcome of the review process—prevalent in some parts of academia, I wonder what we are subjecting our new students to. Is this really beneficial for scientific progress, or are we just perpetuating an old system because we have no better ideas?5

An even more chilling thought: are we missing out on critical voices because they do not feel welcome in our community? I have no good answers to this, but I will be more considerate in the future and aim to sign my own reviews more often. I believe that lifting that specific type of veil could be beneficial: if anyone can read why someone endorsed—or did not endorse—a specific paper, one would hope that this introduces more accountability into the review process.

I also learned to question my beliefs (even) more often, and to seek more feedback about how we set up our various publication systems. I still strongly believe that open communication between authors and reviewers is vital for science to thrive—whether everything has to be revealed to a wider audience, though, is a different matter.

Until next time, here’s to improving academia step by step!


  1. The pandemic has shown that certain parts of society might be unwilling to assign more weight to the opinion of experts (provided they are talking about their own topics), but the age of experts is not fully over yet, I hope. I do not want to get sucked into this mire, so please do not pick a fight with me. My stance on experts versus non-experts can be summarised mostly in Bayesian terms: given an expert opinion on a topic they know well, there is a high likelihood that the opinion is ‘correct’ or ‘factual,’ insofar as one can assign these properties to an opinion, but of course experts are not infallible—and the real world often requires complex, nuanced trade-offs. ↩︎

  2. What comes next is somewhat of a straw-man argument, but I just dislike the general focus on incentives, so I need to be a little bit rhetorical here—sorry! Here we go: notice that incentives are always magically pulled out of a hat, as if no one could be expected to do their job simply because it is the right thing to do. I do not have to be incentivised to do research; I do it because it is awesome, and I do it to the best of my abilities. No amount of money would incentivise me to do it better (but if my employers are reading this: of course I should get a raise). ↩︎

  3. This procedure was suggested by the organising team, who worked hard at getting sponsors. I am not aware of any other conference providing such rewards. ↩︎

  4. I specifically mean people starting out in research, for instance freshly minted Ph.D. students. By the idiosyncratic and inscrutable rules of academia, I also still count as an ‘early-career researcher,’ even though I managed to acquire my academic carrot, i.e. my Ph.D., after years of getting the academic stick. ↩︎

  5. The same question applies to various other aspects of academia, of course. ↩︎