Why the current over-pessimism about science is the perfect confirmation bias vehicle and we should proceed rationally

Jeff Leek
2013-05-06

Recently there have been some high profile flameouts in scientific research. A couple examples include [the Duke saga](http://simplystatistics.org/2012/02/27/the-duke-saga-starter-set/), [the replication issues in social sciences](http://simplystatistics.org/2012/07/03/replication-and-validation-in-omics-studies-just-as/), [p-value hacking](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1850704), [fabricated data](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2114571), [not enough open-access publication](http://www.michaeleisen.org/blog/?p=1312), and on and on.

Some of these results have had major non-scientific consequences, which is the reason they have drawn so much attention both inside and outside of the academic community. For example, the Duke saga ended up on 60 Minutes, the lack of replication has led to high-profile arguments between scientists in Discover and Nature among other outlets, and the Reinhart-Rogoff result has been criticized (sometimes comically) because of a lack of reproducibility.

The result of this high-profile attention is that there is now a movement to “clean up science”. As has been pointed out, there is a group of scientists who are making names for themselves primarily as critics of what is wrong with the scientific process. The good news is that these key players are calling attention to issues that are critically important for the scientific enterprise: reproducibility, replicability, and open access, among others.

I too am concerned about these issues and have altered my own research process to try to address them within my research group. I also think that the solutions others have proposed on a larger scale, like alltrials.net or PLoS, are great advances for the scientific community.

But I am also very worried that people are using a few high-profile cases to hyperventilate about the real, solvable, and recognized problems in the scientific process. These people get credit and a lot of attention for pointing out how science is “failing”, yet they aren’t giving proportional time to all of the incredible success stories we have had, both in performing research and in reforming research with reproducibility, open access, and replication initiatives.

We should recognize that science is hard and that even dedicated, diligent, and honest scientists will make mistakes, perform irreproducible or irreplicable studies, or publish in closed-access journals. Sometimes this is because of ignorance of good research principles, sometimes it is because people are new to working in a world where data and computation are major players, and sometimes it is because it is legitimately, really hard to make real advances in science. I think people who participate in real science recognize these problems and are eager to solve them. I have also noticed that real scientists generally try to propose a solution when they complain about these issues.

But it seems like sometimes people use these high-profile mistakes out of context to push their own scientific pet peeves. For example:

  1. I don’t like p-values, and there are lots of results that fail to replicate, so it must be the fault of p-values. Many studies fail to replicate not because the researchers used p-values, but because the studies were underpowered or rested on poorly understood scientific mechanisms.
  2. I don’t like not being able to access people’s code, so a lack of reproducibility is causing science to fail. Even in the two most infamous cases (Potti and Reinhart-Rogoff) the problem with the science wasn’t reproducibility - it was that the analysis itself was incorrect/flawed. The lack of reproducibility compounded the problem but wasn’t its root cause.
  3. I don’t like not being able to access scientific papers, so closed-access journals are evil. For whatever reason (I don’t fully understand why) it is expensive to publish journals, and that cost gets paid somewhere: either as open-access publication fees or as closed-access subscription fees. If I’m a junior researcher, I’ll definitely post my preprints online, but I also want papers in “good” journals and don’t have a ton of grant money, so sometimes I’ll choose closed access.
  4. I don’t like these crazy headlines from social psychology (substitute another field here), and there have been some that haven’t replicated, so none must replicate. Of course some papers won’t replicate, including even high-profile papers. If you are doing statistics, then by definition some papers won’t replicate, since you have to make decisions from noisy data; the short simulation after this list makes that point concrete.

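To make that last point concrete, here is a minimal simulation - my sketch, not from the original post, with purely illustrative choices of effect size, sample size, and alpha. Even when every study is honest and the effect is real, modest statistical power means that many results which clear p < 0.05 will fail to clear it again in an exact replication.

```python
# Minimal sketch (illustrative parameters, not from the post): honest
# two-sample studies of a real effect, tested at alpha = 0.05, then
# replicated exactly. Counts how often a "significant" original finding
# is followed by a "significant" replication.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies = 10_000   # number of simulated original studies
n = 30               # subjects per group (illustrative)
effect = 0.4         # true standardized effect size (illustrative)
alpha = 0.05

def one_study() -> float:
    """One honest two-sample t-test on noisy data with a real effect."""
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(effect, 1.0, n)
    return stats.ttest_ind(treated, control).pvalue

significant = replicated = 0
for _ in range(n_studies):
    if one_study() < alpha:          # the "published" original finding
        significant += 1
        if one_study() < alpha:      # an exact, honest replication
            replicated += 1

print(f"significant originals: {significant} / {n_studies}")
print(f"significant replications: {replicated} / {significant} "
      f"({replicated / significant:.0%})")
# At these settings power is only about a third, so roughly two thirds of
# exact replications of *true*, honestly reported effects still miss
# p < 0.05 - with no p-hacking or fraud anywhere in the pipeline.
```
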
These are just a few examples where I feel like a basic, fixable flaw in science has been used to justify a hugely pessimistic view of science in general. I’m not saying it is all rainbows and unicorns; of course we want to improve the process. But I’m worried that the real, solvable problems we have, with enough hyperbole, will make it look like the sky is falling on the scientific process, and will leave the door open for individuals like Rep. Lamar Smith to come in and turn the scientific process into a political one.

P.S. Andrew Gelman posted on a similar topic yesterday as well. He argues the case for less optimism, to make sure we don’t become complacent. He added a P.S. mentioning two points on which we can agree: (1) science is hard, it is a human system, and we are working to fix the flaws inherent in such systems; and (2) it is still easier to publish a splashy claim than to publish a correction. I definitely agree with both. I also think Gelman would likely agree that we need to be careful about reciprocity on these issues. If earnest scientists work hard to address reproducibility, replicability, open access, etc., then the people who criticize them should have to work just as hard to justify their critiques. Just because it is a critique doesn’t mean it should automatically get the same treatment as the original paper.