Thoughts on pre- vs. post-publication peer-review

• Author: Christophe Dessimoz •

A few months ago, we published a paper that spent four years in peer-review (story behind the paper). Because of this, I feel entitled to an opinion on the pre- vs. post-publication peer-review debate.

Background on preprints and their effect on peer-review

If you have been living under a rock, or if you are not on Twitter, you may not have noticed that preprints are becoming more widely accepted in biology—supported by initiatives such as Haldane’s Sieve and bioRxiv. This is particularly true in population genetics, evolutionary biology, bioinformatics, and genomics. Typically, a manuscript is made available as a preprint just as it is submitted to a scientific journal, and therefore prior to peer-review. I say “made available” rather than “published” because, although preprints can be read by anybody, the general view is that the canonical publication event lies with the journal, post peer-review. Because of this, many traditional journals tolerate the practice: peer-review technically remains “pre-publication” and the journals keep their gatekeeping function.

The key benefit of preprints is that they accelerate scientific communication. Peer-review can be long and frustrating for authors: reviewers sometimes misjudge the importance of papers or request unreasonable amounts of additional work. The ability to bypass peer-review can thus be liberating. If we instead recognised the preprint as the canonical publication event, so goes the idea, peer-review would be relegated to a secondary role and journals would lose their gatekeeping function. This is the “post-publication” peer-review model.

For more background info, here are a few pointers:

Advantages of pre- and post-publication peer-review

What did our recent experience teach us? Spending four years in various stages of peer-review is a huge strain on the authors, reviewers, and editors. On the positive side, the final paper was more complete (some of the methods tested were published after our first submission!). Undoubtedly, it became a clearer and more solid paper. However, as I pointed out in my post on the paper, our main conclusions did not change. They could have been brought to everyone’s attention four years earlier.

So should pre-publication peer-review be abolished? In this particular case, it’s debatable. If we had known what awaited us, we would have released the manuscript as a preprint (e.g. on arXiv, bioRxiv, or PeerJ PrePrints)—something we have done with subsequent pieces of work.

However, in general, I still think that pre-publication peer-review has many merits. First, this experience was thankfully extreme; on average, things are much faster: 2–4 months including one revision cycle is quite typical in my experience. With some journals, this can be even faster (Bioinformatics, PeerJ, or MBE come to mind). Second, pre-publication peer-review can identify flaws or interesting points overlooked by the authors—to the point that, in some cases (large multi-author studies!), peer-reviewers almost certainly wind up contributing more to the paper than some of the co-authors. Furthermore, while reviewers do not always agree in their comments, when they do, the authors had better pay attention.

That being said, unorthodox or controversial results can be extremely difficult to publish under the pre-publication peer-review model, particularly if some of the reviewers have vested interests in the status quo.

Best practices in the age of pre- and post-publication peer-review

So what model am I arguing for? I think the emerging combination of preprints and journals can give us the best of both worlds. Preprints ensure that advances are disseminated quickly, broadly, and without impediment. Journals add a layer of quality control and differentiation—and even glamour, if they so choose. Importantly, this new paradigm shifts power from the publishers back to us, the researchers. And you all remember what comes with great power, right?

As peer-reviewers, it is our job to identify specific issues with the work and bring them to the attention of the authors and the editor, but ultimately we should remember that the work we review is not our own. If the authors choose to ignore points we consider important, it may be more constructive and rewarding to write a rebuttal paper instead. Post-publication peer-review, as it were!

As editors, we should pay attention to potential conflicts of interest, focus on a limited set of key points that need addressing, and remember that every additional round of revisions costs precious time and resources. The additional delay can result in wasteful duplication of the work by others, or in missed opportunities to build upon the findings. We thus have a moral obligation to balance pre- and post-publication peer-review. Too often, editors lazily or timidly forward all reviewer comments back and forth, round after round, without taking a stance and with little consideration for the burden this places on the authors and the rest of the community.

As authors, one simple but powerful thing we can do is to acknowledge the shortcomings of our work more openly and to candidly disclose unresolved issues. In case of fundamental disagreement with a peer-reviewer, the impasse may be overcome by including an account of the disagreement as part of the paper. In fact, this is precisely what we ended up doing in our paper. In the discussion section, we wrote:

And sixth, we disclose that in spite of the several lines of evidence and numerous controls provided in this study, one anonymous referee remained skeptical of our conclusions. His/her arguments were: (i) instead of using default parameters or globally optimized ones, filtering parameters should be adjusted for each data set; (ii) the observations that, in some cases, phylogenies reconstructed using a least-squares distance method were more accurate than phylogenies reconstructed using a maximum likelihood method (Supplementary Figs. 7–10 available on Dryad at http://dx.doi.org/10.5061/dryad.pc5j0), and that ClustalW performed “surprisingly well” compared with other aligners, are indicative that the data sets used for the species discordance test are flawed; (iii) the parsimony criterion underlying the minimum duplication test and the Ensembl analyses is questionable.

Indeed, not every issue can be resolved during peer-review. At some point, the debate should happen in the open. Any single paper is rarely the “last word” on a question anyway. And as our editor admitted, a bit of controversy is good for the journal.


Reference

Tan G, Muffato M, Ledergerber C, Herrero J, Goldman N, Gil M, & Dessimoz C (2015). Current Methods for Automated Filtering of Multiple Sequence Alignments Frequently Worsen Single-Gene Phylogenetic Inference. Systematic Biology, 64(5), 778–791. PMID: 26031838




The Dessimoz Lab blog is licensed under a Creative Commons Attribution 4.0 International License.