Open peer review is any of several possible modifications of the traditional scholarly peer review process. The three most common modifications to which the term is applied are open identities, open reports, and open participation.
These modifications are intended to address various perceived shortcomings of the traditional scholarly peer review process, in particular its lack of transparency, lack of incentives, wastefulness, and susceptibility to bullying and harassment.[3]
In 1999, the open access journal Journal of Medical Internet Research[4] was launched and, from its inception, published the names of the reviewers at the bottom of each published article. Also in 1999, the British Medical Journal moved to an open peer review system, revealing reviewers' identities to the authors but not to readers, and in 2000 the medical journals in the open access BMC series[5] published by BioMed Central launched using open peer review. As with the BMJ, the reviewers' names are included on the peer review reports. In addition, if the article is published, the reports are made available online as part of the "pre-publication history".
Several other journals published by the BMJ Group allow optional open peer review,[6] as does PLoS Medicine, published by the Public Library of Science.[7] The BMJ's Rapid Responses feature allows ongoing debate and criticism following publication.[8]
In June 2006, Nature launched an experiment in parallel open peer review: some articles that had been submitted to the regular anonymous process were also available online for open, identified public comment. The results were less than encouraging – only 5% of authors agreed to participate in the experiment, and only 54% of those articles received comments.[9] [10] The editors have suggested that researchers may have been too busy to take part and were reluctant to make their names public. The knowledge that articles were simultaneously being subjected to anonymous peer review may also have affected the uptake.
In February 2006, the journal Biology Direct was launched by BioMed Central, adding another alternative to the traditional model of peer review. If authors can find three members of the Editorial Board who will each return a report or will themselves solicit an external review, the article will be published. As with Philica, reviewers cannot suppress publication, but in contrast to Philica, no reviews are anonymous and no article is published without being reviewed. Authors have the opportunity to withdraw their article, to revise it in response to the reviews, or to publish it without revision. If the authors proceed with publication of their article despite critical comments, readers can clearly see any negative comments along with the names of the reviewers.[11] In the social sciences, there have been experiments with wiki-style, signed peer reviews, for example in an issue of the Shakespeare Quarterly.[12]
In 2010, the BMJ began publishing signed reviewers' reports alongside accepted papers, after determining that telling reviewers that their signed reviews might be posted publicly did not significantly affect the quality of the reviews.[13]
In 2011, Peerage of Science, an independent peer review service, was launched with several non-traditional approaches to academic peer review. Most prominently, these included the judging and scoring of the accuracy and justifiability of peer reviews, and the concurrent use of a single peer review round by several participating journals. Peerage of Science went out of business only a few years after it was founded, because it could attract neither enough publishers nor enough reviewers.
Starting in 2013 with the launch of F1000Research, some publishers have combined open peer review with post-publication peer review by using a versioned article system. At F1000Research, articles are published before review, and invited peer review reports (and reviewer names) are published with the article as they come in.[14] Author-revised versions of the article are then linked to the original. A similar post-publication review system with versioned articles is used by ScienceOpen, launched in 2014.[15]
Also in 2013, researchers from the College of Information and Computer Sciences at the University of Massachusetts Amherst founded the OpenReview website[16] to host anonymized review reports alongside articles; as of 2023, it is popular among computer scientists.
In 2014, Life implemented an open peer review system,[17] under which the peer-review reports and authors' responses are published as an integral part of the final version of each article.
Since 2016, Synlett has been experimenting with closed crowd peer review. The article under review is sent to a pool of more than 80 expert reviewers, who then collaboratively comment on the manuscript.[18]
In an effort to address issues with the reproducibility of research results, some scholars are asking that authors agree to share their raw data as part of the peer review process.[19] As far back as 1962, for example, psychologists have attempted to obtain raw data sets from other researchers in order to reanalyze them, with mixed results. A recent attempt yielded only seven data sets out of fifty requests. The notion of obtaining, let alone requiring, open data as a condition of peer review remains controversial.[20] In 2020, reviewers' lack of access to raw data led to article retractions in the prestigious journals The New England Journal of Medicine and The Lancet. Many journals now require access to raw data to be included in peer review.[21]
Several publishers and journals operate various types of open peer review:
Peer review at The BMJ,[24] BioMed Central,[25] EMBO,[26] eLife,[27] ReScience C, and the Semantic Web journal[28] involves posting the entire pre-publication history of the article online, including not only signed reviews of the article, but also its previous versions and in some cases names of handling editors and author responses to the reviewers. Furthermore, the Semantic Web journal publishes reviews of all submissions, including rejected ones, on its website, while eLife plans to publish the reviews not only for published articles, but also for rejected articles.
The European Geosciences Union operates public discussions where open peer review is conducted before suitable articles are accepted for publication in the journal.[29]
Sci, an open access journal covering all research fields, adopted a post-publication public peer review (P4R) model in which it promised authors immediate visibility of their manuscripts on the journal's online platform after a brief and limited check of scientific soundness and proper reporting and a screen for plagiarism and offensive material; the manuscript is then rendered open for public review by the entire community.[30] [31] [32] [33]
In 2021, the authors of nearly half of the articles published by Nature chose to publish the reviewer reports as well. The journal considers this an encouraging trial of transparent peer review.
Some platforms, including some preprint servers, facilitate open peer review of preprints.
Open identities have been argued to incite reviewers to be "more tactful and constructive" than they would be if they could remain anonymous, while at the same time allowing authors to accumulate enemies who try to keep their papers from being published or their grant applications from being successful.[35]
Open peer review in all its forms has been argued to favour more honest reviewing, and to prevent reviewers from following their individual agendas.[36]
An article by Lonni Besançon et al. has also argued that open peer review helps evaluate the legitimacy of manuscripts involving editorial conflicts of interest; the authors argue that the COVID-19 pandemic has spurred many publishers to open up their review processes, increasing transparency.[37]
In an experiment with 56 research articles accepted by the Medical Journal of Australia in 1996–1997, the articles were published online together with the peer reviewers' comments; readers could email their comments and the authors could amend their articles further before print publication.[38] The investigators concluded that the process had modest benefits for authors, editors and readers.
Some studies have found that open identities lead to an increase in the quality of reviews, while other studies find no significant effect.[39]
Open peer review at BMJ journals has lent itself to randomized trials to study open identity and open report reviews. These studies did not find that open identities and open reports significantly affected the quality of review or the rate of acceptance of articles for publication, and there was only one reported instance of a conflict between authors and reviewers ("adverse event"). The only significant negative effect of open peer review was "increasing the likelihood of reviewers declining to review".[2] [40]
In some cases, open identities have helped detect reviewers' conflicts of interests.[41]
Open participation has been criticised as a popularity contest in which well-known authors are more likely to get their manuscripts reviewed than others.[42] However, even with this implementation of open peer review, both authors and reviewers acknowledged that open reviews could lead to higher-quality reviews, foster collaborations, and reduce the "cite-me" effect.
According to a 2020 Nature editorial, experience from Nature Communications counters the concerns that open reports would be less critical or would require an excessive amount of work from reviewers.
The publication of reviewer comments makes it possible to conduct quantitative studies of the peer review process. For example, a 2021 study found that scrutiny by more reviewers mostly does not correlate with more impactful papers.