Several bloggers are calling for the end of peer-reviewed journals as we know them. Jeff suggests that we replace them with a system in which everyone posts their papers on their blog, PubMed aggregates the feeds, and peer review happens post-publication via, for example, tallying up like and dislike votes. In my view, many of these critiques conflate problems arising from different aspects of the process. Here I try to break down the current system into its key components and defend the one aspect I think we should preserve (at least for now): pre-publication peer review.
To avoid confusion, let me start by enumerating some of the components where I agree change is needed.
- There is no need to produce paper copies of our publications. Indulging our preference for reading hard copies does not justify keeping the price of disseminating our work twice as high as it should be.
- There is no reason to send the same manuscript (adapted to fit each journal's guidelines) to several journals until one accepts it. This frustrating and time-consuming process adds very little value (we previously described Nick Jewell's solution).
- There is no reason for publications to be static. As Jeff and many others suggest, readers should be able to systematically comment on and rate published papers, and authors should be able to update them.
However, all these changes can be implemented without doing away with pre-publication peer review.
A key reason American and British universities consistently lead the pack of research institutions is their strict adherence to a peer-review system that minimizes cronyism and tolerance for mediocrity. At the center of this system is a promotion process in which outside experts evaluate a candidate's ability to produce high-quality ideas. Peer-reviewed journal articles are the backbone of this evaluation. When reviewing a candidate I familiarize myself with his or her work by reading 5-10 key papers. It's true that I read these papers without regard to the journal they appeared in, and blog posts would serve the same purpose. But I also rely on the publication section of the CV, not only because reading every paper is logistically impossible, but because each listed paper has already been evaluated by roughly three referees plus an editor, an assessment independent of my own. I also use the journal's prestige: although it is a highly noisy measure of quality, the law of large numbers starts kicking in after 10 papers or so.
So are three reviewers better than the entire Internet? Can a reddit-like system provide as much signal as the current peer-reviewed journal? You can think of the current system as a cooperative in which we all agree to read each other's papers thoroughly (we evaluate two or three for each one we publish), with journals taking care of the logistics. The result of a review is an estimate of quality ranging from highest (Nature, Science) to 0 (not published). This estimate is certainly noisy given the bias and quality variance of referees and editors. But across all the papers on a CV, variance is reduced and bias averages out (I note that we complain vociferously when the bias keeps us from publishing in a good journal, but we rarely say a word when the bias helps us get into a better journal than deserved). Jeff's argument is that post-publication review will result in many more evaluations and therefore a stronger signal-to-noise ratio. I need to see evidence of this before being convinced. In the current system, roughly three referees commit to a thorough review of the paper. If they do a sloppy job, they will embarrass themselves with an editor or an AE (not a good thing). With a post-publication review system nobody is obliged to review. I fear most papers will go without comments or votes, including really good ones. My feeling is that marketing and PR will matter even more than they do now, and that's not a good thing.
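The law-of-large-numbers point can be sketched with a quick simulation. To be clear, every number below is made up for illustration: we model each paper's journal-based quality estimate as the researcher's true quality plus per-paper referee noise, then compare the spread of single-paper estimates to the spread of averages over a 10-paper CV.

```python
# Illustrative simulation (numbers are assumptions, not data): averaging
# noisy per-paper journal estimates across a CV shrinks the noise.
import random
import statistics

random.seed(42)

TRUE_QUALITY = 5.0  # hypothetical "true" quality of a researcher's work
REFEREE_SD = 2.0    # assumed per-paper noise from referees and editors

def paper_score():
    # One noisy journal-level estimate of quality for a single paper.
    return random.gauss(TRUE_QUALITY, REFEREE_SD)

def cv_score(n_papers=10):
    # Average estimate across a CV with n_papers publications.
    return statistics.mean(paper_score() for _ in range(n_papers))

# Compare the spread of single-paper scores to that of 10-paper CV averages.
singles = [paper_score() for _ in range(2000)]
cvs = [cv_score(10) for _ in range(2000)]

print(round(statistics.stdev(singles), 2))  # close to REFEREE_SD
print(round(statistics.stdev(cvs), 2))      # roughly REFEREE_SD / sqrt(10)
```

The standard deviation of the CV averages comes out about three times smaller than that of single-paper scores, which is the sense in which a noisy journal-prestige signal becomes usable once a candidate has 10 or so papers.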
Dissemination of ideas is another important role of the literature. Jeff describes a couple of anecdotes to argue that dissemination can be sped up by simply posting papers on your blog:
> I posted a quick idea called the Leekasso, which led to some discussion on the blog, has nearly 2,000 page views
But the typical junior investigator does not have a blog with hundreds of followers. Will their papers ever be read if even more papers are added to the already bloated scientific literature? The current peer-review system provides an important filter. There is an inherent trade-off between speed of dissemination and quality, and it's not clear to me that we should swing the balance all the way over to the speed side. There are other ways to speed up dissemination that we should try first. Also, there is nothing stopping us from posting our papers online before publication and promoting them via Twitter or an aggregator. In fact, as pointed out by Jan Jensen on Jeff's post, arXiv papers are indexed on Google Scholar within a week, and Google Scholar also keeps track of arXiv citations.
The Internet is bringing many changes that will improve our peer-review system. But the current pre-publication peer-review process does a decent job of
- providing signal for the promotion process and
- reducing noise in the literature to make dissemination possible.
Any alternative system should be evaluated carefully before we dismantle one that has helped keep our universities at the top of the world rankings.