21 Feb

Here's why the scientific publishing system can never be "fixed"


There's been much discussion recently about how the scientific publishing system is "broken". The latest example I saw was a tweet from Princeton biophysicist Josh Shaevitz.

On this blog, we've talked quite a bit about the publishing system, including in this interview with Michael Eisen. Jeff recently posted about changing the reviewing system (again). We have a few other posts on this topic. Yes, we like to complain like the best of them.

But there's a simple fact: The scientific publishing system, as broken as you may find it to be, can never truly be fixed.

Here's the tl;dr

  • The collection of scientific publications out there makes up a marketplace of ideas, hypotheses, theorems, conjectures, and comments about nature.
  • Each member of society has an algorithm for placing a value on each of those publications. Valuation methodologies vary, but they often include factors like the reputation of the author(s), the journal in which the paper was published, the source of funding, and one's own personal beliefs about the quality of the work described in the publication.
  • Given a valuation methodology, each scientist can rank order the publications from "most valuable" to "least valuable" (a toy sketch of one such ranking follows this list).
  • Fixing the scientific publication system would require forcing everyone to agree on the same valuation methodology for all publications.
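To make the tl;dr concrete, here is a minimal, purely illustrative sketch (in Python) of what one reader's valuation methodology and ranking might look like. The `Publication` fields, the weights, and the example papers are hypothetical choices for illustration, not anything the post itself proposes.

```python
from dataclasses import dataclass

@dataclass
class Publication:
    title: str
    author_reputation: float   # 0-1, how much you trust the authors
    journal_prestige: float    # 0-1, how much you trust the venue
    perceived_quality: float   # 0-1, your own read of the work

def my_valuation(pub: Publication,
                 w_author: float = 0.3,
                 w_journal: float = 0.2,
                 w_quality: float = 0.5) -> float:
    """One person's (entirely subjective) valuation: a weighted sum of factors."""
    return (w_author * pub.author_reputation
            + w_journal * pub.journal_prestige
            + w_quality * pub.perceived_quality)

papers = [
    Publication("Paper A", 0.9, 0.95, 0.4),
    Publication("Paper B", 0.5, 0.30, 0.9),
]

# Rank from "most valuable" to "least valuable" -- a reader with
# different weights would produce a different ranking.
for p in sorted(papers, key=my_valuation, reverse=True):
    print(f"{p.title}: {my_valuation(p):.2f}")
```

Changing the weights changes the ranking, which is exactly the point: there is no single set of weights that everyone will agree on.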

The Marketplace of Publications

The first point is that the collection of scientific publications makes up a kind of market of ideas. Although we don't really "trade" publications in this market, we do estimate the value of each publication and label some as "important" and some as not. This framing is useful because it allows us to draw analogies with other types of markets. In particular, consider the following question: Can you think of a market in any item where each item was priced perfectly, so that every (rational) person agreed on its value? I can't.

Consider the stock market, which might be the most analyzed market in the world. Professional investors make their entire living analyzing the companies that are listed on stock exchanges and buying and selling their shares based on what they believe is the value of those companies. And yet, there can be huge disagreements over the valuation of these companies. Consider the current Herbalife drama, where the investors William Ackman and Carl Icahn (and Daniel Loeb) are taking opposite sides of the trade (Ackman is short and Icahn is long). They can't both be right about the valuation; they must have different valuation strategies. Every day, the market's collective valuation of different companies changes, reacting to new information and perhaps to irrational behavior. In the long run, good companies survive while others do not. In the meantime, everyone will argue about the appropriate price.

Journals are in some ways like the stock exchanges of yore. There are very prestigious ones (e.g. the NYSE, the "Big Board") and there are less prestigious ones (e.g. NASDAQ), and everyone tries to get their publication into the prestigious journals. Journals have listing requirements--you can't just put any publication in the journal. It has to meet certain standards set by the journal. The importance of being listed on a prestigious stock exchange has diminished somewhat over the years; the most valuable company in the world trades on the NASDAQ. Similarly, although Science, Nature, and the New England Journal of Medicine are still quite sought after by scientists, competition is increasing from journals (such as those from the Public Library of Science) that are willing to publish papers that are technically correct and let readers determine their importance.

What's the "Fix"?

Now let's consider a world where we obliterate journals like Nature and Science and there is only the "one true journal". Suppose this journal accepts any publication that satisfies some basic technical requirements (i.e. requirements that are not content-based) and then has a sophisticated rating system that allows readers to comment on, rate, and otherwise evaluate each publication. There is no pre-publication peer review. Everything is immediately published. Problem solved? Not really, in my opinion. Here's what I think would end up happening:

  • People would have to (slightly) alter their methodology for ranking individual scientists. They would not be able to say "so-and-so has 10 Nature papers, so he must be good". But most likely, another proxy for actually reading the papers would arise. For example, "My buddy from University of Whatever put this paper in his top-ten list, so it must be good". As Michael Eisen said in our interview, the ranking system induced by journals like Science and Nature is just an abstract hierarchy; we can still reproduce the hierarchy even if Science/Nature don't exist.
  • In the current system, certain publications often "get stuck" with overly inflated valuations, and it is often difficult to effectively criticize such publications because there is no equivalent venue for informed criticism on par with Science and Nature. These publications achieve such high valuations partly because they are published in high-end journals like Nature and Science, and partly because some people actually believe they are valuable. In other words, it is possible to create a "bubble" where people irrationally believe a publication is valuable just because everyone else believes it's valuable. If you destroy the current publication system, there will still be publications that are "over-valued", just like in every other market. And furthermore, it will continue to be difficult to criticize such publications. Think of all the analysts who were yelling about how the housing market was dangerously inflated back in 2007. Did anyone listen? Not until it was too late.

What Can be Done?

I don't mean for this post to be depressing, but I think there's a basic reality about publication that perhaps is not fully appreciated. That said, I believe there are things that can be done to improve science itself, as well as the publication system.

  • Raise the ROC curves of science, i.e. improve our ability to distinguish true discoveries from false ones. Efforts in this direction make everyone better and improve our ability to make more important discoveries.
  • Increase the reproducibility of science. This is kind of the "Sarbanes-Oxley" of science. For the most part, I think the debate about whether science should be made more reproducible is coming to a close (or it is for me). The real question is how we do it, for all scientists. I don't think there are enough people thinking about this question. It will likely be a mix of different strategies, policies, incentives, and tools.
  • Develop more sophisticated evaluation technologies for publications. Again, to paraphrase Michael Eisen, we are currently better able to judge the value of a pencil on Amazon than the value of a scientific publication. The technology exists for improving the system, but someone has to implement it (a toy sketch of one such rating scheme follows this list). I think a useful system along these lines would go a long way towards de-emphasizing the importance of "vanity journals" like Nature and Science.
  • Make open access more accessible. Open access journals have been an important addition to the publication universe, but they are still very expensive (the cost has just been shifted). We need to think more about lowering the overall cost of publication so that it is truly open access.
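As one hedged illustration of what a more sophisticated evaluation technology could involve, here is a minimal sketch of a Bayesian-average ("shrunken") rating, similar in spirit to how retail sites temper scores so that a handful of enthusiastic votes does not outrank a large number of consistently good ones. The function name, the prior values, and the example papers are all hypothetical.

```python
def shrunken_rating(ratings, prior_mean=3.0, prior_weight=5):
    """Bayesian-average rating: pull scores with few votes toward a prior mean."""
    n = len(ratings)
    if n == 0:
        return prior_mean
    return (prior_weight * prior_mean + sum(ratings)) / (prior_weight + n)

# Hypothetical post-publication ratings (1-5 stars) for three papers
paper_ratings = {
    "flashy-but-thin": [5, 5],                    # few, enthusiastic reviews
    "solid-methods":   [4, 5, 4, 4, 5, 4, 4, 5],  # many, consistently good
    "unreviewed":      [],                        # no ratings yet
}

scores = {name: shrunken_rating(r) for name, r in paper_ratings.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f}")
```

In this toy example the consistently well-rated paper ends up ahead of the one with only two enthusiastic votes; the point is only that more careful aggregation is technically easy, not that this particular formula is the right one.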

Ultimately, in a universe with finite resources, a system has to be developed to determine how those resources should be distributed. Any system that we come up with will be flawed because, by necessity, there will be winners and losers. I think serious efforts need to be made to make the system more fair and more transparent, but the problem will never truly be "fixed" to everyone's satisfaction.

  • Chen

    I feel a good review should do three things:
    1) Improve the paper with helpful suggestions

    2) Detect plagiarism, falsehoods, misleading or incomplete information

    3) Give a thumbs-up or thumbs-down based on 'novelty' and perceived impact (some hazy notion of weighted influence)

    I feel that 1) can really be up-and-down. 2) holds up reasonably well; prestigious journals, at least in more mathy fields, do an OK job at avoiding type-2 errors. 3) seems to be the main focus of this post. 3) is useful for everyone in that it ideally saves us time and effort.

    I feel you are missing the economics of the matter. The stock market (indeed, liquid markets in general) has the nice property of forcing people to account not only for their own valuations, but those of others. The key here is that people who are right are rewarded, and people who are wrong are punished in a very direct way. This mechanism is much weaker, and fraught with other problems, in an honour system such as the science publishing review system. Honour systems become fragile when there is a shift to money systems -- something that is widespread in science due to declining government funding.

    Even disregarding principal-agent problems (something most scientists and engineers are prone to do), there is another issue with the review system. The critics are a sideshow, when they should be fairly important. In your post you state the 'fix' is having universal standards of reviewing. I disagree. Having heterogeneous but individually consistent critics would be tremendously valuable. Read papers that your clones like. Equivalently, read papers that your opposites dislike. How to get critics to be honest once they are in the spotlight is the real problem.

    I feel that journals are still the appropriate venue for 1) and 2), but 3) is falling apart due to the increasing quantification of research.

  • Tal

    This strikes me as a bit of a silly equivocation on what the term "fix" means. It seems pretty clear that when most people say they'd like to "fix" the publishing system, they're not suggesting the system could ever be made perfect. I'd be very surprised if Michael Eisen or anyone else who's advocated for large changes to the publishing system thinks that the problem of bias can ever be completely eliminated! I think it's pretty clear to everyone who's advocating for post-publication review platforms and a restructuring or elimination of conventional journals that any system we ultimately end up with will have some problems. The point, however, is that all problems are not equal.

    What you seem to be implying--otherwise the point of the post is unclear, because who would argue with the basic assertion that nothing is ever perfect?--is that the kind of overvaluation we currently have of papers published in Science or Nature is equivalent in magnitude to what we'd have under a system with widespread post-publication evaluation filters adapted from platforms like Amazon, Netflix, and reddit. Considering how effective collaborative filtering approaches have been in just about every other domain they've been applied in, this seems like a rather bad bet to make. Surely one can accept that there will always be some distortion of the "true" value of a research finding (if such a notion is even coherent) while also believing that a centralized post-publication platform that lets anyone anywhere in the world comment on and iteratively evaluate any document is likely to produce much less bias than a completely decentralized system that lacks any obvious means for registering and evaluating commentary on articles. I think it's pretty clear that that is what people typically mean when they say they want to "fix" the system.

  • jwoodgett

    The inherent and seemingly unshakable appeal of the terribly flawed JIF as a means to "sort" quality indicates that scientists seek guidance in focussing on what to read, what is interesting, among the sea of publications. We seek tools and shortcuts to help navigate and refine our reading lists. This is similar to the increasing practice of selective lenses being applied to the gush of internet information. And it suffers from the same dangers. By setting filters based on biased or flawed parameters (JIF, political leaning, favourite writer, etc.) we set up a self-reinforcement system that promotes popularity and squeezes out originality. It favours the established and makes it harder for novelty to break in. Eventually, our behaviors adapt to be like the filters we rely upon and our values become equal to those of the artificial altar to which we've subscribed. This is utterly disastrous to science (and bad enough for other domains where it leads to polarization). If we know anything about science it is that 99% is incremental, mundane, predictable. It is the 1%* (if we are lucky) that comprises 99% of true advance. We jump on the 1% when we see it and so this sliver of science drives all. Yet I wonder whether our publication system is suffocating this 1% much of the time. How would we know? Instead, publication should be designed to seek out and advertise that 1% of initially apparent madness. We, as a community, should spend far more effort on trying to identify and celebrate the uncomfortable science at the expense of band-wagoning the majority of research that has no need for help - it's doing just fine.

    * It's probably less than 1% but let's be optimistic.

  • EastCoastElitist

    An anecdote: I'm co-author on a manuscript which was submitted to a (the most?) prestigious journal in my field. The manuscript was rejected. Here's the thing though: the reviewers blew their reviews. (Two "phoned it in" and one swung and missed.) Major criticisms offered were simply incorrect. What's the workaround for that? In this case, we submitted the paper to a conference and it was accepted. It'll be published in the (unrefereed) conference proceedings. Perhaps we'll revise and submit to one of the society's journals. From my standpoint the important thing is that the paper get published and that interested parties have an opportunity to read it. Constructive criticism from readers who are genuinely interested is more valuable to me than comments from people selected at random by an editor. But suppose our paper hadn't been accepted for the conference? (We submitted it post-deadline.) What would have been a reasonable fallback? Post it to arXiv? Send our latest draft to potentially interested parties, incorporate their feedback, then post it on the lead author's website? (Which I'd be fine with, aside from the fact that websites aren't archival repositories.)

  • Konrad Hinsen

    Your post is not so much about the publication system, but about the evaluation system. The main purpose of the publication system is to enable scientists to build on their peers' work. Evaluation is a separate concern and need not necessarily be based on bibliometry.