So peer review is broken, according to a recent piece in The Scientist: it’s slow, biased, arbitrary, and overloads the community. But does post-publication review help to remedy these flaws? A new study by Peter Gotzsche and colleagues in BMJ (see also the linked editorial) examines this question by assessing the severity of criticisms posted to BMJ papers via the journal’s online “Rapid Responses” system, and the adequacy of the authors’ responses to them. (The journal’s policy on rapid responses is outlined in a previous editorial, which notes that discussions contributing substantially to the topic will be published online.)
The key findings from Gotzsche and colleagues could be seen as a wake-up call to those who have proposed replacing traditional peer review, in whole or in part, with a post-publication model of evaluation. Of the 105 research papers published in the journal between October 2005 and September 2007 that attracted substantive criticism (defined as “a problem that could invalidate the research or reduce its reliability”), only 47 received a subsequent response from the authors of the original paper. The adequacy of the authors’ replies was assessed both by two editors of the journal and by the individuals who originally posted the criticisms: the editors judged that most responding authors (either 27 or 29, depending on which editor was rating the replies) had fully addressed the criticisms, whilst only 6 of the critics felt that their points had been answered adequately. So, even taking the editors’ more favourable judgement as a baseline, just under a third of the papers that received potentially invalidating criticisms were subsequently given an acceptable defence by their authors. The researchers suggest that editors may be too keen to accept sub-par responses from authors in a bid to defend the journal’s prestige, and that authors, trying to protect their own reputations and that of their published work, are similarly reluctant to respond.
In the linked editorial, David Schriger and Doug Altman discuss some of the possible reasons for this failure of post-publication peer review. They comment that the prevailing culture, in which “authors feel no obligation to respond…” and in which critics “fear they will be perceived as picky or anti-collegial…”, works against self-correction in the scientific literature. They propose that less research, not more, is needed: fewer papers, of higher quality. It is all the more striking that these findings, and proposals, have arisen from a study of a journal with a highly stringent pre-publication peer review process: according to the latest data available via its audit, BMJ accepted just under 4% of the papers submitted in 2007. It is also worth noting that the study by Peter Gotzsche and colleagues may itself highlight problems with pre-publication peer review: to find 105 articles attracting “substantive criticism” the researchers had to screen just 305 eligible papers in the journal, so over a third of the papers published during the eligible timeframe received comments relating to validity or reliability. We already know, however, from previous research (e.g. a 2008 study) that peer reviewers often fail to find major errors in submitted manuscripts.
The posting policy for online comments at PLoS journals differs from the Rapid Responses system at BMJ: all comments on a PLoS journal article are posted immediately, provided they relate to the article under discussion and conform to the norms of civilized scientific discussion (insulting or inflammatory language is not allowed). We routinely publish a full dataset reporting, for every PLoS paper, the number of accesses, comments and ratings. The latest dataset (reporting data to early 2010) shows that 9890 articles had been published in PLoS ONE, which collected 750 note threads and 2129 comment threads (notes are generally used to highlight minor points or to link to other content, whilst comments are for more general, lengthy discussions). Overall, around 17% of papers in this dataset received some kind of online commentary, either a note or a comment. However, to date PLoS has not attempted an analysis evaluating the content of these types of discussions. Importantly, PLoS ONE does not aim to select papers pre-publication for interest or importance; rather, its editorial criteria are oriented entirely around methodological rigour and quality of reporting. We therefore hope that post-publication discussions will also incorporate elements of subjective assessment of the importance or interest of research, whether via notes and comments or via the ratings system. In addition, PLoS is keen to experiment with new approaches to peer review, for example with the launch of moderated fora (such as PLoS Currents: Influenza) dedicated to rapid publication of data and findings.