I was listening to the Effort Report episode on The Messy Execution of Reproducible Research, where they were discussing the piece about Amy Cuddy in the New York Times. I think both the article and the podcast did a good job of discussing the importance of reproducibility and the challenges of the social interactions around the topic. After listening to the podcast, I realized that I see a lot of posts about reproducibility/replicability, but many of them focus on the technical side. So I started to think about compiling a list of more cultural things we can do to reduce the stress/pressure around the reproducibility crisis.
I’m sure others have pointed these out in other places but I am procrastinating writing something else so I’m writing these down while I’m thinking about them :).
- We can define what we mean by “reproduce” and “replicate”. Different fields have different definitions of the words reproduce and replicate. If you are publishing a new study, we now have an R package that you can use to create figures showing what changed and what stayed the same between the original study and your new work. Defining concretely what was the same and what was different will reduce some of the miscommunication about what a reproducibility/replicability study means.
- We can remember that replication is statistical, not deterministic. If you are doing a new study where you re-collect data according to a protocol from another group, you should not expect to get exactly the same answer. So if a result is statistically significant in one study and not significant in another, that may be within the bounds of what we’d expect to see (see the short simulation after this list).
- We can remember that there is a difference between exploratory and confirmatory research. There is a reason that randomized trials are the basis for regulatory decisions by the FDA and others. But if we require every single study to meet the standard of a pre-registered, randomized, double-blind, controlled trial with a huge sample size, we might miss some important discoveries.
- We can remember that a failed replication isn’t always a scientific failure. One thing Roger and Elizabeth point out is that many scientific studies won’t replicate, nor should we expect all studies to replicate. Sometimes a study is preliminary, exploratory, or the first observation of an unusual event. It may be a perfectly well-executed study that fails to replicate because the sample size was too small or there was an unmodeled confounder. That doesn’t mean the study was a failure; it was just an observation that didn’t pan out.
- We can stop publicizing scientific results as solutions so quickly. University press offices, startup companies, and researchers stressed for funding are under pressure to label every discovery a “cure”, a “diagnosis”, or a “solution to the crisis”. A lot of the frustration in the scientific community arises from this overstatement of results. It is hard to escape (for me too!), but we can exercise skepticism about claims of solutions based on a single scientific paper.
- We can be persistent and private as long as possible. Like many people, I’ve run into frustrating cases where the data behind a published paper isn’t available. I have contacted the authors only to be rebuffed. I have found that it takes work to convince them to provide the data, but I can often do it without resorting to publicizing the problems or making the exchange adversarial.
- We can recognize that data is valuable, but that in science you don’t own it. There is still discussion of data parasites and data symbionts. I have been both a data collector and a data analyst. I realize it is frustrating to release your data and watch others quickly publish ideas you may have had. At the same time, I’ve seen how frustrating it can be to watch people keep their data private and inaccessible indefinitely after publication. The reality is that people deserve credit for collecting data, but they don’t own the data they collect.
- We should cut each other some slack. I think that a lot of the frustration around reproducibility and replicability comes from the way the problem is approached. On the one hand, if you publish a scientific paper and someone tries to reproduce or replicate your work, remember that they are doing so because they are interested, and try to help them. Even if they find some flaws (as they inevitably will) or the study doesn’t replicate (as it might not), that is not a failure on your part. On the other hand, if you are the one reproducing or replicating, remember that the scientist on the other end is a person. That person is subject to all the same funding, publication, and promotion stresses as everyone else. So we should try not to turn the discovery of reproducibility/replicability problems into high-profile “gotchas”, but focus on pointing out the real scientific issues that arise and on moving forward in a positive way.
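To make the point about replication being statistical rather than deterministic concrete, here is a minimal simulation sketch in Python. The parameters are hypothetical choices for illustration (30 subjects per group, a true effect of half a standard deviation, significance at 0.05), not numbers from any particular study. It repeatedly runs pairs of identically designed studies of the same true effect and counts how often one study comes out significant while its twin does not.

```python
# Minimal sketch: two identically designed studies of the same true effect
# can still disagree on statistical significance. All parameters are
# hypothetical, chosen so each study has roughly 50% power.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30          # subjects per group (hypothetical)
effect = 0.5    # true effect, in standard-deviation units (hypothetical)
alpha = 0.05    # significance threshold
n_pairs = 10_000

def run_study():
    """Simulate one two-arm study and return its two-sample t-test p-value."""
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(effect, 1.0, n)
    return stats.ttest_ind(treated, control).pvalue

# Count pairs of studies where one is significant and the other is not.
disagreements = sum(
    (run_study() < alpha) != (run_study() < alpha) for _ in range(n_pairs)
)
print(f"Pairs disagreeing on significance: {disagreements / n_pairs:.0%}")
```

With these settings each study has roughly 50% power, so about half of the pairs disagree on significance even though both studies are perfectly executed and the effect is real. A failed replication under those conditions is exactly the kind of statistical variation described above, not evidence that anyone did anything wrong.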
Like many others, I have noticed that the stakes around reproducibility/replicability have been ratcheted up by social media and by the rising prominence of reproducibility as a field. As someone who has experienced the very real stress all of this can create in scientists’ lives, I’d love to see us move forward in a more positive and collaborative way as we address the explosion of data in the scientific enterprise.