Reproducibility

Does NIH fund innovative work? Does Nature care about publishing accurate articles?

_Editor’s Note: In a recent post we disagreed with a Nature article claiming that NIH doesn’t support innovation. Our colleague Steven Salzberg actually looked at the data and wrote the guest post below._

Nature published an article last month with the provocative title “Research grants: Conform and be funded.” The authors looked at papers with over 1000 citations to find out whether scientists “who do the most influential scientific work get funded by the NIH.”

Replication and validation in -omics studies - just as important as reproducibility

The psychology/social psychology community has made replication a huge focus over the last year. One reason is the recent, public blow-up over a famous study that did not replicate. There are also concerns about the experimental and conceptual design of these studies that go beyond simple lack of replication. In genomics, a similar scandal occurred due to what amounted to “data fudging”, although in the genomics case much of the blame and focus has been on lack of reproducibility or data availability.

People in positions of power that don't understand statistics are a big problem for genomics

I finally got around to reading the IOM report on translational omics and it is very good. The report lays out problems with current practices and how these led to undesired results such as the now infamous Duke trials and the growth in retractions in the scientific literature. Specific recommendations are provided related to reproducibility and validation. I expect the report will improve things, although I think bigger improvements will come as a result of retirements.

Replication, psychology, and big science

Reproducibility has been a hot topic for the last several years among computational scientists. A study is reproducible if there is a specific set of computational functions/analyses (usually specified in terms of code) that exactly reproduces all of the numbers in a published paper from raw data. It is now recognized that a critical component of the scientific process is that data analyses can be reproduced. This point has been driven home particularly for personalized medicine applications, where irreproducible results can lead to delays in evaluating new procedures that affect patients’ health.
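To make that definition concrete, here is a minimal sketch of what such a set of analyses might look like: a single deterministic script that takes raw data to published numbers, with no manual steps in between. The file names, column names, and statistics here are hypothetical, not drawn from any study discussed on this blog.

```python
# reproduce.py -- regenerate the paper's summary numbers from the raw data.
# Hypothetical sketch: file and column names are made up for illustration.
import csv

# Step 1: read the raw data exactly as distributed alongside the paper.
with open("raw_data.csv") as f:
    values = [float(row["measurement"]) for row in csv.DictReader(f)]

# Step 2: recompute the published summary statistics from scratch.
n = len(values)
mean = sum(values) / n
sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5

# Step 3: write the recomputed numbers to a file that can be checked,
# digit for digit, against the table in the published paper.
with open("table1.txt", "w") as out:
    out.write(f"n = {n}\nmean = {mean:.4f}\nsd = {sd:.4f}\n")
```

The point of the sketch is that one command takes a reader from raw data to every number in the table, which is exactly the standard the definition above asks for.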

Some thoughts from Keith Baggerly on the recently released IOM report on translational omics

Shortly after the Duke trial scandal broke, the Institute of Medicine convened a committee to write a report on translational omics. Several statisticians (including one of our interviewees) either served on the committee or provided key testimony. The report came out yesterday. Nature, Nature Medicine, and Science had posts about the release. Keith Baggerly sent an email with his thoughts and he gave me permission to post it here. He starts by pointing out that the Science piece has a key new observation:

Where do you get your data?

Here’s a question I get fairly frequently from various types of people: Where do you get your data? This is sometimes followed up quickly with “Can we use some of your data?” My contention is that if someone asks you these questions, you should start looking for the exits. There are, of course, legitimate reasons why someone might ask: for example, they might be interested in the source of the data to verify its quality.

Preventing Errors through Reproducibility

Checklist mania has hit clinical medicine thanks to people like Peter Pronovost and many others. The basic idea is that simple and short checklists along with changes to clinical culture can prevent major errors from occurring in medical practice. One particular success story is Pronovost’s central line checklist which dramatically reduced bloodstream infections in hospital intensive care units. There are three important points about the checklist. First, it neatly summarizes information, bringing the latest evidence directly to clinical practice.

Reproducible Research in Computational Science

First of all, thanks to Rafa for scooping me with my own article. Not sure if that’s reverse scooping or recursive scooping or…. The latest issue of _Science_ has a special section on Data Replication and Reproducibility. As part of the section I wrote a brief commentary on the need for reproducible research in computational science. _Science_ has a pretty tight word limit for its commentaries, so it was unfortunately necessary to omit a number of relevant topics.

Reproducible Research and Turkey

Over the recent Thanksgiving break I naturally started thinking about reproducible research in between salting the turkey and making the turkey stock. Clearly, these things are all related. I sometimes get the sense that many people see reproducibility as essentially binary: a published paper is either reproducible, as in you can compute every single last numerical result to within epsilon precision, or it’s not. My feeling is that there is a spectrum of reproducibility when it comes to published scientific findings.
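To make the strict end of that spectrum concrete, here is a hypothetical check (the names and values below are invented) that one might run after re-executing a paper’s code, comparing each regenerated number to the published one within an explicit epsilon:

```python
import math

# Made-up published values, and made-up values regenerated by rerunning the code.
published = {"mean": 3.1416, "effect_size": 0.52}
regenerated = {"mean": 3.14159991, "effect_size": 0.52000004}

# "Reproducible to within epsilon": floating-point results can differ slightly
# across machines and library versions, so compare with a small tolerance
# rather than demanding bit-for-bit equality.
for name, expected in published.items():
    ok = math.isclose(regenerated[name], expected, rel_tol=1e-4)
    print(f"{name}: {'OK' if ok else 'MISMATCH'}")
```

Most published work sits well short of this standard, which is why thinking of reproducibility as a spectrum rather than a binary is useful.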

Reproducible research: Notes from the field

Over the past year, I’ve been doing a lot of talking about reproducible research: talking to people, talking on panel discussions, and talking about some of my own work. It seems to me that interest in the topic has exploded recently, in part due to some recent scandals, such as the Duke clinical trials fiasco. If you are unfamiliar with the term “reproducible research”, the basic idea is that authors of published research should make available the necessary materials so that others may reproduce the published findings to a very high degree of similarity.

Interview with Victoria Stodden

Victoria Stodden is an assistant professor of statistics at Columbia University in New York City. She moved to Columbia after getting her Ph.D. at Stanford University. Victoria has made major contributions to the area of reproducible research and has been appointed to the NSF’s Advisory Committee for Cyberinfrastructure. She is the recent recipient of an NSF grant for “Policy Design for Reproducibility and Data Sharing in Computational Science”.

I gave a talk on reproducible research back in July at the Applied Mathematics Perspectives workshop in Vancouver, BC. The video is on YouTube (http://www.youtube.com/watch?v=aH8dpcirW1U), and there’s also a Silverlight version where you can actually see the slides while I’m talking.

Graduate student data analysis inspired by a high-school teacher

I love watching TED talks. One of my absolute favorites is the talk by Dan Meyer on how math class needs a makeover. Dan also has one of the more fascinating blogs I have read. He talks about math education, primarily K-12 education. His posts on curriculum design, assessment, work ethic, and homework are really, really good. In fact, just go read all his posts. You won’t regret it.

Errors in Biomedical Computing

Biomedical Computation Review has a nice summary (in which I am quoted briefly) by Kristin Sainani about the many different types of errors in computational research, including the infamous Duke incident and some other recent examples. The reproducible research policy at _Biostatistics_ is described as an example of how the publication process might need to change to prevent errors from persisting (or occurring).

The Duke Saga

For those of you who don’t know about the saga involving genomic signatures, I highly recommend reading this very good summary published in The Economist. Baggerly and Coombes are two statisticians who can confidently say they have made an impact on clinical research and actually saved lives. A paper by this pair describing the details was published in the Annals of Applied Statistics, as most of the biology journals refused to publish their letters to the editor.