Where next in peer review? Part 1: COPE commentary

We have become used to speaking about scholarly peer review with some scepticism. Critics note that it is subjective and therefore inconsistent, it can be slow, it tends to down-weight negative results, and it is increasingly susceptible to manipulation. Richard Smith, former editor of the BMJ, has described it as only ‘the least worst [system] we have’ and likened its outcomes to spinning a roulette wheel. He also reports that an earlier editor half-jokingly proposed throwing all the papers the journal received down the stairs and publishing those that reached the bottom. It seemed that we were doomed to live with a system which we simultaneously despair of and cannot imagine doing without. But in July 2023, we saw some interesting signs that there may be new life in peer review after all.

A revolution in peer review?

Covid-19 saw a surge in rapid, community-led peer review because of the urgent need to understand more about the virus. Unfortunately, a number of the resulting articles later had to be retracted, and this damaged faith in community-driven rapid review. This time, however, those problems seem so far to have been avoided.

The story started on 28 July, when a research group from South Korea announced on the arXiv preprint server that they had found a material which acted as a superconductor at room temperature [1, 2]. arXiv saw a surge of activity, as did the popular press, scholarly fora, and social media. Scientists from all over the world set up experiments to review and replicate the method, leading to perhaps the most intensely discussed story in physics since the discovery of the Higgs boson. At the time of writing, arXiv has logged almost 60 trackback links from the original papers, and two revisions to the original paper have been uploaded to the server. The energy and activity spurred by the two original papers was remarkable, and it revealed several things about the nature and possibilities of peer review.

First, that a genuinely constructive, critical and spontaneous system of review by peers is possible. Publishers like Frontiers use collaborative review, but it is centrally organised. Sites like PubPeer and Twitter/X attract unsolicited comments on new papers, but they are not generally at the level of detail seen here. In the case of the superconductor experiment the results were so exciting that other researchers were moved to scrutinise them in very great detail. The method reported in the papers was also unusually replicable in labs around the world. Other physicists and engineers were therefore able to test the integrity of the original experiment – and did in fact find some potential flaws.

Second, the speed of the response to the papers was aided by the ease of modern information sharing. The news spread globally in a matter of hours, very far from the experience of most preprints, and certainly from the 17 weeks (on average, and with considerable variation by discipline and journal impact factor) it can take for the traditional peer review process to be completed. This encouraged the authors in turn to respond swiftly with further revisions.

Third, that although peer review has its origins in traditional journals, it is now adapting to a variety of new forms of scholarly publishing. Preprint servers have been a feature of some academic disciplines for several decades (arXiv began in 1991), but they are now part of an expanding range of innovations, some of which are outlined below. These have in turn driven further innovation in practices like open review across the scholarly publishing ecosystem. On arXiv, for example, the identities of both authors and reviewers/commenters are public. In this case, then, the platform arguably shaped the form and the spontaneity of the review process, while the commenting model has encouraged transparency and a sense of community.

Fourth, the speed and intensity of the review process was based on the voluntary and real-time, spontaneous actions of peers. Peer reviewers are rarely paid (though the cost to a journal for a published paper has been estimated in the region of £1000), but their actions are also rarely spontaneous. This takes us to the heart of one of the key characteristics of the peer review system as we know it: its place in a scholarly gift economy where the currency is esteem, reciprocity, and custom. We will return to this.

The state of peer review

September’s Peer Review Week is always a time for reflection on peer review, and this year especially so, with a theme of Peer review and the future of publishing. October also saw a COPE Forum on peer review models, and COPE’s recent Publication Integrity Week featured a lively session on AI and peer review (on which more in Part 2 of this Commentary). It has been good to see that there is a lot of optimism about new initiatives in peer review, but there is also some consensus that many aspects of scholarly publishing have been stretched almost to breaking point by rapid recent changes. These include open access and data sharing, fraudulent behaviour, the growth of journal special issues, and structural incentives in academia which reward a race to publish at any cost. Other factors have more positive motivations (such as welcome moves to improve inclusivity policies and to encourage articles which report null results or less novel methodologies), but still place unprecedented strain on a pool of reviewers which is not keeping pace. The number of published scholarly articles almost doubled from 2016 to 2022, and rapid growth is likely to continue with the development of new platforms and types of output. The system was arguably never built to spot misconduct or problems with data, as it is now commonly expected to do.

Certainly, in his 2006 assessment of such practices at the BMJ, Richard Smith was not positive about their success in improving reviewer reports or catching ethical breaches and errors. This is an even more pressing issue in 2023, for the reasons noted already. However, it is exacerbated by the persistent under-valuing of peer review by institutions and funders. A survey carried out by Taylor & Francis estimated that each reviewer spends an average of 4-6 hours on a single paper, and yet they are almost always unpaid, either in financial terms or in time allowed by their institution. The system therefore continues to rely almost exclusively on a gift economy of reciprocity and community spirit. COPE’s Ethical Guidelines for Peer Reviewers acknowledges this, stating that ‘[t]he peer review process depends to a large extent on the trust and willing participation of the scholarly community and requires that everyone involved behaves responsibly and ethically.’ The European Code of Conduct for Research Integrity includes the assertion that ‘[r]esearchers [must] take seriously their commitment and responsibility to the research community, through refereeing, reviewing, and assessment’. These values are precisely the same as the most commonly reported reasons why people review: it gives them satisfaction, feeds into their sense of identity as a member of the academic community, and allows them to engage with – and trust – the latest advances in scholarship. It is also one of the ways in which early-career researchers learn the codes of academic reporting and ethical behaviour.

In many ways this is an inspiration for our common endeavour to promote high standards of academic integrity and ethical practices in publishing. However, it does not sit well with many of the wider issues in the publishing landscape. Peer review remains the fundamental groundwork of academic publishing, but it rests on unstable foundations. How is it possible to sustain such a huge edifice on the goodwill of a dwindling proportion of authors, and in the face of ever more systemic and complex unethical practices?

Fortunately, a number of organisations think that they have at least partial solutions to the problem. Systematic study of peer review has until recently been rare (excepting the periodic Peer Review Congress, which dates back to 1989), but this is starting to change, with a growing number of scholars aligning themselves with the interdisciplinary field of Peer Review Studies. These scholars have started to quantify and test the impact of innovations like open review, reviewer training, and automation. Individual journals and publishers have also brought in new review models like portable or cascading review, where reports are shared with other journals either via a central organisation or more informally. NISO has published a standard peer review terminology to aid transparency and understanding of these differences.

Others have started inviting specialist statistical review, or setting up ways of awarding credit to reviewers: for example, by publishing lists of reviewer names together with the number and timeliness of their reviews, offering fee reductions on future submissions, or using more formal mechanisms such as Publons (now absorbed into Clarivate). A 2007 report on Peer review challenges for humanities and social sciences published by the British Academy recommended that all universities in receipt of public funds should not only ‘accept an obligation to encourage its researchers to engage in [review] activities, recognising that peer review is an essential part of the fabric of academic life’, but also that the costs should be met from funding council monies to support research infrastructure. Other organisations still are offering training courses for reviewers.

There is also a growing number of organisations experimenting with different models of review and publication. BioMed Central (BMC), for example, has operated on an open access basis since 1999 and includes journals in its portfolio which use patient peer review, results-free review (where reviewers do not see study results until after the initial assessment stage), and re-review opt out (more information on these processes can be found on their website). The publisher Frontiers lists collaborative peer review as one of its innovations, based on an online forum with real-time interactions between authors, reviewers, and editors. eLife adopted a new model of peer review in January 2023 which reviews on the basis of significance of findings and strength of evidence, and invites public reviews following publication as a ‘Reviewed Preprint’. F1000, meanwhile, publishes papers after running prepublication checks for policies and ethical guidelines only. Expert reviewers are then invited to comment, and their names and reports are published alongside the article, together with any responses from the authors. A number of these innovations combine services from traditional journal models (indexing, peer review) with the speed and openness of preprint servers.

These organisations are demonstrating the value of experimentation and report high levels of submission and author satisfaction. However, despite this, and despite all the problems identified with traditional peer review, few seem to see any really viable alternative. When Scholarly Kitchen asked members of the Society for Scholarly Publishing in 2022 whether research integrity was possible without peer review (part 1 and part 2), they received a variety of nuanced responses. However, no one gave a definite ‘yes’. Respondents mentioned benefits like the discovery of unintended flaws and study limitations, the value of external inspection, the possibility of identifying biases including racism (assuming that peer review is itself sufficiently diverse – which it is generally recognised not to be), and increasing trust in the system and its outputs. At the same time, they noted the contradiction in denigrating some aspects of the peer review system while simultaneously holding it up as the gold standard of academic integrity. It is also striking that most of the innovations in peer review have taken place at science journals; arts, humanities and social science journals lag behind, as do books, although the latter have a great variety of types of review process.

The future of peer review

It seems, then, that we are still reluctant to abandon the trust, esteem and reciprocity which form the basis of modern peer review. However, there is a growing consensus that the system needs some concerted improvements. One is greater transparency about peer review models, so that publishers and scholars alike can better assess their utility. Another is to embed credit for reviewers, and a third concerns better training for reviewers, via in-house courses or more mentoring. A fourth set of recommendations surrounds the use of Artificial Intelligence in peer review, but we will return to that in Part 2.

We opened this Commentary by outlining the peer review revolution captured in the superconductor experiment. This certainly showed a more vibrant, accountable and participatory version of peer review than we have seen for a long time. We need now to think carefully about what we can learn from it: was it assessing what we expect peer review to assess; were there unintended outcomes from its openness and rapidity; could it be enhanced by the use of AI tools or did it benefit from being rooted in a community of (human) peers? Those working in scholarly publishing may be learning from this story long after the debate over superconductivity has been resolved.

Alysa Levene, COPE Operations Manager

Notes

[1] Sukbae Lee et al., ‘The First Room-Temperature Ambient-Pressure Superconductor’, arXiv (2023)

[2] Sukbae Lee et al., ‘Superconductor Pb10-xCux(PO4)6O showing levitation at room temperature and atmospheric pressure and mechanism’, arXiv (2023)

[NB: the figure for activity on arXiv is based on a search of papers mentioning LK-99 in the abstract, submitted between 23 July 2023 and 15 August 2023.]
