John Tennant recently wrote an excellent article, “The state of the art in peer review”, and Ugo Bardi commented with “So, You Think Science Will Save the World? Are You Sure?” (… I love the “Armata Brancaleone”!).
My thoughts on the debate:
1. It is possible to write a perfectly wrong paper in good faith. It can be based on good logical arguments and good data analysis, and be perfectly reviewed; it just happens to be wrong. Sometimes it is simply statistics: at a 5% significance level, roughly one in twenty tests of a true null hypothesis will appear significant by pure chance. Sometimes it is bad luck or a stupid error. Sometimes it is a perfectly reasonable conclusion based on good data, except it later turns out that an alternative explanation was not considered. There is NO way that a review process can always lead to perfect scientific results.
2. That is a good thing. All science is based on preliminary data, preliminary analysis, and preliminary understanding. We gain scientific knowledge only if multiple research groups are independently able to confirm (or refute) a result. It is important that no one believes a scientific result to be a final truth just because it was published in a good, peer-reviewed scientific journal.
3. Conversely, it is not unheard of for a brilliant breakthrough paper to be rejected repeatedly because it contradicts ingrained assumptions of the current scientific orthodoxy (see e.g. Fiona MacDonald 2016, “8 Scientific Papers That Were Rejected Before Going on to Win a Nobel Prize”). It is important that unorthodox articles retain a chance of publication (even if a lower-than-average one). They may be the ones awarded a Nobel Prize decades later. Or just junk.
4. Thus, statistically speaking, the process needs to balance type I errors (accepting flawed work) against type II errors (rejecting sound or groundbreaking work). There is no perfect solution. The process we have is one of increasing the signal while reducing the noise; it is not a methodology for obtaining truth.
5. A poor or even malicious review will not sink an article; a poor journal editor may. The editor has to read the reviews and form an understanding of any discrepancies that goes beyond the reviewers' recommended decisions. If she or he is unconvinced by either a positive or a negative review, she or he must request yet another opinion.
6. I agree that a lot can be improved in the review process. My priority is a system, independent of specific journals, that supports continuous peer review after publication. It would make all later comments and documentation of suspected or proven errors easily discoverable and fully linked to the article.
7. However, I think we do need to keep the option of anonymous peer review in the system. I find it instructive to look at Wikipedia: it thrives on anonymity. Despite all the drawbacks associated with that, and despite repeated attempts to replace Wikipedia with systems involving full authorship, the anonymous system is the one that works. There is wisdom in anonymity. For one, it is sometimes justified to harshly criticize the work of a friend. Also, no reviewer is in a position to assess a manuscript with full objectivity. Very few reviewers will go to the extent of re-analyzing all data, running all models independently, and proof-reading all model computer code (preferably writing their own and comparing the results…) to make sure that no errors were made. Reviewers make a best-effort attempt to find errors, but they simply cannot vouch for correctness. Maybe at some point, in a culture that understands that errors are inevitable, it will be an improvement to review non-anonymously. At the moment I find that publicly underwriting a review is a danger to one's reputation: after all, as a reviewer you have perhaps 1/100th of the author's time, and a correspondingly higher chance of making an error.
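The statistical claim in point 1 can be illustrated with a quick simulation (a hypothetical sketch of my own, not from either of the articles linked above): we repeatedly "test" pure noise at the 5% level and count how often it looks significant anyway.

```python
import random

random.seed(0)

def one_study(n=30):
    """Simulate one study where the null hypothesis is TRUE:
    the data are pure noise, mean 0 and standard deviation 1."""
    data = [random.gauss(0.0, 1.0) for _ in range(n)]
    mean = sum(data) / n
    # z-statistic for the sample mean (known variance);
    # |z| > 1.96 counts as "significant" at the two-sided 5% level
    z = mean * n ** 0.5
    return abs(z) > 1.96

trials = 20_000
false_positives = sum(one_study() for _ in range(trials))
rate = false_positives / trials
print(f"False-positive rate: {rate:.3f}")  # lands close to 0.05
```

Every one of these "studies" is honest and correctly analyzed, yet about one in twenty of them would report a significant finding — which is exactly the point: no review process can catch that.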
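The trade-off in point 4 can be sketched numerically with a toy model (entirely my own invention for illustration; the quality scores and noise level are arbitrary): reviews observe a paper's true quality plus noise, and the journal accepts everything above a threshold. Lowering the bar admits more flawed papers (type I); raising it rejects more sound ones (type II); no threshold eliminates both.

```python
import random

random.seed(1)

def review_outcomes(threshold, n=50_000, noise=1.0):
    """Toy model: half the papers are good (true score +1), half are
    bad (true score -1). The review observes score + Gaussian noise
    and accepts papers above `threshold`. Returns the rates of
    type I errors (bad paper accepted) and type II errors (good
    paper rejected)."""
    type1 = type2 = good = bad = 0
    for _ in range(n):
        is_good = random.random() < 0.5
        true_score = 1.0 if is_good else -1.0
        observed = true_score + random.gauss(0.0, noise)
        accepted = observed > threshold
        if is_good:
            good += 1
            type2 += not accepted
        else:
            bad += 1
            type1 += accepted
    return type1 / bad, type2 / good

for threshold in (-1.0, 0.0, 1.0):
    t1, t2 = review_outcomes(threshold)
    print(f"threshold {threshold:+.1f}: type I {t1:.2f}, type II {t2:.2f}")
```

Running this shows the two error rates moving in opposite directions as the threshold shifts: a strict journal filters out junk at the price of rejecting future Nobel work, and vice versa. All the threshold controls is where on that curve you sit, not whether the curve exists.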