The Big in Big Data relates to importance, not size

In the past couple of years several non-statisticians have asked me "What is Big Data exactly?" or "How big is Big Data?" My answer has been that Big Data is much more about "data" than "big". I explain below.

[Figure: Google Trends results for searches of "Big Data" and "Massive Data"]

Since 2011 Big Data has been all over the news. The New York Times, The Economist, Science, Nature, and others have told us that the Big Data Revolution is upon us (see the Google Trends figure above). But was this really a revolution? What happened to the Massive Data Revolution (see the figure above)? For something to be called a revolution, there must be a drastic change, a discontinuity, or a quantum leap of some kind. So has there been such a discontinuity in the rate of growth of data? Although this may be true for some fields (in genomics, for example, next generation sequencing did introduce a discontinuity around 2007), overall, data size seems to have been growing at a steady rate for decades. For example, in the graph below (see this paper for the source) note the trend in internet traffic data (which, by the way, dwarfs genomics data). There does seem to be a change of rate, but it happened during the 1990s, which brings me to my main point.

[Figure: internet data traffic over time]

Although several fields (including Statistics) are having to innovate to keep up with growing data size, I don't see this as anything new. But I do think that we are in the midst of a Big Data revolution. Although the media only noticed it recently, it started about 30 years ago. The discontinuity is not in the size of data, but in the percent of fields (across academia, industry and government) that use data. At some point in the 1980s, with the advent of cheap computers, data moved from the file cabinet to the disk drive. Then in the 1990s, with the democratization of the internet, these data became easy to share. All of a sudden, people could use data to answer questions that were previously answered only by experts, theory or intuition.

In this blog we like to point out examples, so let me review a few. Credit card companies started using purchase data to detect fraud. Baseball teams started scraping data and evaluating players without ever seeing them. Financial companies started analyzing stock market data to develop investment strategies. Environmental scientists started to gather and analyze data from air pollution monitors. Molecular biologists started quantifying outcomes of interest into matrices of numbers (as opposed to looking at stains on nylon membranes) to discover new tumor types and develop diagnostic tools. Cities started using crime data to guide policing strategies. Netflix started using customer ratings to recommend movies. Retail stores started mining bonus card data to deliver targeted advertisements. Note that all of the data sets mentioned were tiny in comparison to, for example, the sky survey data collected by astronomers. But I still call this phenomenon Big Data because the percent of people using data was in fact Big.


I borrowed the title of this post from a very nice presentation by Diego Kuonen.


10 things statistics taught us about big data analysis

In my previous post I pointed out that a major problem with big data is that applied statistics has been left out. But many cool ideas in applied statistics are really relevant for big data analysis. So I thought I'd try to answer the second question in my previous post: "When thinking about the big data era, what are some statistical ideas we've already figured out?" Because the internet loves top 10 lists I came up with 10, but there are more if people find this interesting. Obviously your mileage may vary with these recommendations, but I think they are generally not a bad idea.

  1. If the goal is prediction accuracy, average many prediction models together. In general, the prediction algorithms that most frequently win Kaggle competitions or the Netflix prize blend multiple models together. The idea is that by averaging (or majority voting over) multiple good prediction algorithms you can reduce variance without paying much of a price in bias. One of the earliest descriptions of this idea was a much simplified version based on bootstrapping samples and building multiple prediction functions - a process called bagging (short for bootstrap aggregating); a minimal sketch appears after this list. Random forests, another incredibly successful prediction algorithm, are based on a similar idea with classification trees.
  2. When testing many hypotheses, correct for multiple testing. This comic points out the problem with standard hypothesis testing when many tests are performed. Classic hypothesis tests are designed to call a set of data significant 5% of the time, even when the null is true (i.e., nothing is going on). One really common choice for correcting for multiple testing is to use the false discovery rate to control the rate at which things you call significant are false discoveries. People like this measure because you can think of it as the rate of noise among the signals you have discovered. Benjamini and Hochberg gave the first definition of the false discovery rate and provided a procedure to control the FDR; a small simulation appears after this list. There is also a really readable introduction to FDR by Storey and Tibshirani.
  3. When you have data measured over space, distance, or time, you should smooth. This is one of the oldest ideas in statistics (regression is a form of smoothing and Galton popularized that a while ago). I personally like locally weighted scatterplot smoothing a lot. This paper by Cleveland is a good one about loess. Here it is in a gif, and a short example appears after this list. But people also like smoothing splines, hidden Markov models, moving averages and many other smoothing choices.
  4. Before you analyze your data with computers, be sure to plot it. A common mistake made by amateur analysts is to immediately jump to fitting models to big data sets with the fanciest computational tool. But you can miss pretty obvious things like this if you don't plot your data. There are too many plots to talk about individually, but one example of an incredibly important plot is the Bland-Altman plot (called an MA-plot in genomics) for comparing measurements from multiple technologies; a simulated example appears after this list. R provides tons of graphics for a reason and ggplot2 makes them pretty.
  5. Interactive analysis is the best way to really figure out what is going on in a data set. This is related to the previous point; if you want to understand a data set you have to be able to play around with it and explore it. You need to make tables, make plots, and identify quirks, outliers, missing data patterns and problems with the data. To do this you need to interact with the data quickly. One way to do this is to analyze the whole data set at once using tools like Hive, Hadoop, or Pig. But an often easier, better, and more cost-effective approach is to use random sampling. As Robert Gentleman put it, "make big data as small as possible as quick as possible".
  6. Know what your real sample size is. It can be easy to be tricked by the size of a data set. Imagine you have an image of a simple black circle on a white background stored as pixels. As the resolution increases the size of the data increases, but the amount of information may not (hence vector graphics). Similarly, in genomics the number of reads you measure (which is a main determinant of data size) is not the sample size; the sample size is the number of individuals. In social networks, the number of people in the network may not be the sample size. If the network is very dense, the effective sample size might be much smaller. In general, the bigger the sample size the better, and sample size and data size aren't always tightly correlated.
  7. Unless you ran a randomized trial, potential confounders should keep you up at night. Confounding is maybe the most fundamental idea in statistical analysis. It is behind spurious correlations like these and the reason why nutrition studies are so hard. It is very hard to hold people to a randomized diet, and people who eat healthy diets might be different from people who don't in other important ways. In big data sets, confounders might be technical variables about how the data were measured or they could be differences over time in Google search terms. Any time you discover a cool new result, your first thought should be, "what are the potential confounders?"
  8. Define a metric for success up front. This is maybe the simplest idea, but it is critical in statistics and decision theory. Sometimes your goal is to discover new relationships, and that is great if you define that up front. One thing that applied statistics has taught us is that changing the criteria you are going for after the fact is really dangerous. So when you find a correlation, don't assume you can predict a new result or that you have discovered which way a causal arrow goes.
  9. Make your code and data available and have smart people check it. As several people pointed out about my last post, the Reinhart and Rogoff problem did not involve big data. But even in this small data example, there was a bug in the code used to analyze the data. With big data and complex models this is even more important. Mozilla Science is doing interesting work on code review for data analysis in science. But in general, if you just get a friend to look over your code it will catch a huge fraction of the problems you might have.
  10. Problem first, not solution backward. One temptation in applied statistics is to take a tool you know well (regression) and use it to hit all the nails (epidemiology problems). There is a similar temptation in big data to get fixated on a tool (Hadoop, Pig, Hive, NoSQL databases, distributed computing, GPGPU, etc.) and ignore the real problem: can we infer that x relates to y, or that x predicts y?
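To make item 1 concrete, here is a minimal bagging sketch in R. It is purely illustrative, not the recipe of any particular competition winner; the built-in mtcars data, the rpart package, and the choice of B = 100 bootstrap samples are just assumptions for the example.

```r
library(rpart)

bag_predict <- function(formula, data, newdata, B = 100) {
  preds <- replicate(B, {
    boot <- data[sample(nrow(data), replace = TRUE), ]  # draw a bootstrap sample
    fit  <- rpart(formula, data = boot)                 # fit one high-variance tree
    predict(fit, newdata)                               # predict on the new data
  })
  rowMeans(preds)                                       # average the B predictions
}

bag_predict(mpg ~ wt + hp, mtcars, mtcars[1:5, ])
```

Each individual tree is noisy, but because the trees are fit to different bootstrap samples their errors partially cancel when averaged, which is where the variance reduction comes from.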
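For item 2, here is a small simulation of the multiple testing problem; the split into 900 null and 100 non-null tests is made up for illustration, and p.adjust() with method = "BH" is base R's implementation of the Benjamini-Hochberg procedure.

```r
set.seed(1)
p_null <- runif(900)                          # p-values from 900 true nulls
p_alt  <- pnorm(rnorm(100, mean = -3))        # small p-values from 100 real signals
p      <- c(p_null, p_alt)

sum(p < 0.05)                                 # naive 0.05 cutoff: many false positives
sum(p.adjust(p, method = "BH") < 0.05)        # BH-adjusted: discoveries are mostly real
```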
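For item 3, a minimal loess example on simulated data measured over "time"; the sine curve, the noise level, and span = 0.3 are arbitrary choices for illustration.

```r
set.seed(1)
x <- seq(0, 10, length.out = 200)
y <- sin(x) + rnorm(200, sd = 0.5)            # noisy measurements of a smooth signal

fit <- loess(y ~ x, span = 0.3)               # span controls how local the smoothing is
plot(x, y, col = "grey")
lines(x, predict(fit), lwd = 2)               # smoothed estimate of the underlying trend
```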
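For item 4, a sketch of a Bland-Altman (MA) plot on simulated measurements from two hypothetical technologies; real genomics analyses would typically work on a log scale, but the idea is the same.

```r
set.seed(1)
m1 <- rnorm(500, mean = 10)                        # measurements from technology 1
m2 <- m1 + rnorm(500, sd = 0.3) + 0.5 * (m1 > 11)  # technology 2, biased at high values

a <- (m1 + m2) / 2                                 # average of the two measurements
d <- m2 - m1                                       # difference between them

plot(a, d, xlab = "Average", ylab = "Difference")
abline(h = 0, lty = 2)                             # systematic departures from 0 reveal disagreement
```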

Why big data is in trouble: they forgot about applied statistics

This year the idea that statistics is important for big data has exploded into the popular media. Here are a few examples, starting with the Lazer et al. paper in Science that got the ball rolling on this idea.

All of these articles warn about issues that statisticians have been thinking about for a very long time: sampling populations, confounders, multiple testing, bias, and overfitting. In the rush to take advantage of the hype around big data, these ideas were ignored or not given sufficient attention.

One reason is that when you actually take the time to do an analysis right, with careful attention to all the sources of variation in the data, it is almost a law that you will have to make smaller claims than you could if you just shoved your data into a machine learning algorithm and reported whatever came out the other side.

The prime example in the press is Google Flu Trends. Google Flu Trends was originally developed as a machine learning algorithm for predicting the number of flu cases based on Google search terms. While the underlying data management and machine learning algorithms were correct, a misunderstanding about the uncertainties in the data collection and modeling process has led to highly inaccurate estimates over time. A statistician would have thought carefully about the sampling process, identified time series components in the spatial trend, investigated why the search terms were predictive, and tried to understand the likely reason that Google Flu Trends was working.

As we have seen, lack of expertise in statistics has led to fundamental errors in both genomic science and economics. In the first case, a team of scientists led by Anil Potti created an algorithm for predicting the response to chemotherapy. This solution was widely praised in both the scientific and popular press. Unfortunately the researchers did not correctly account for all the sources of variation in the data set, misapplied statistical methods, and ignored major data integrity problems. The lead author and the editors who handled this paper didn't have the necessary statistical expertise, which led to major consequences and cancelled clinical trials.

Similarly, two economists, Reinhart and Rogoff, published a paper claiming that GDP growth was slowed by high governmental debt. Later it was discovered that there was an error in an Excel spreadsheet they used to perform the analysis. But more importantly, the choice of weights they used in their regression model was questioned as being unrealistic and leading to dramatically different conclusions than the ones the authors espoused publicly. The primary failing was a lack of sensitivity analysis of data analytic assumptions that any well-trained applied statistician would have performed.

Statistical thinking has also been conspicuously absent from major public big data efforts so far.

One example of this kind of thinking is this insane table from the alumni magazine of the University of California, which I found via this amazing talk by Terry Speed (via Rafa; go watch his talk right now, it gets right to the heart of the issue). It shows a fundamental disrespect for applied statisticians who have developed serious expertise in a range of scientific disciplines.

[Image: the table from the University of California alumni magazine]

All of this leads to two questions:

  1. Given the importance of statistical thinking, why aren't statisticians involved in these initiatives?
  2. When thinking about the big data era, what are some statistical ideas we've already figured out?


JHU Data Science: More is More

Today Jeff Leek, Brian Caffo, and I are launching 3 new courses on Coursera as part of the Johns Hopkins Data Science Specialization: Exploratory Data Analysis, Reproducible Research, and Statistical Inference.

I'm particularly excited about Reproducible Research, not just because I'm teaching it, but because I think it's essentially the first of its kind being offered in a massive open format. Given the rich discussions about reproducibility that have occurred over the past few years, I'm happy to finally be able to offer this course for free to a large audience.

These courses are launching in addition to the first 3 courses in the sequence: The Data Scientist's Toolbox, R Programming, and Getting and Cleaning Data, which are also running this month in case you missed your chance in April.

All told, we have 6 of the 9 courses in the Specialization available as of today. We're really looking forward to next month, when we will launch the final 3 courses: Regression Models, Practical Machine Learning, and Developing Data Products. We also have some exciting announcements coming soon regarding the Capstone Projects.

Every course will be available every month, so don't worry about missing a session. You can always come back next month.


Confession: I sometimes enjoy reading the fake journal/conference spam

I've spent a considerable amount of time setting up filters to avoid getting spam from fake journals and conferences. Unfortunately, they are exceptionally good at thwarting my defenses. This does not annoy me as much as I pretend because, secretly, I enjoy reading some of these emails. Here are three of my favorites.

1) Over-the-top robot:

It gives us immense pleasure to invite you and your research allies to submit a manuscript for the journal “REDACTED”. The expertise of you in the never ending field of Gene Technology is highly appreciable. The level of intricacy shown by you in your work makes us even more proud, and we believe that your works should be known to mankind of science.

2) Sarcastic robot?

First of all, congratulations on the publication of your highly cited original article < The human colon cancer methylome shows similar hypo- and hypermethylation at conserved tissue-specific CpG island shores > in the field of colon cancer, which has been cited more than 1 times and is in the world's top one percent of papers. Such high number of citations reflects the high quality and influence of your paper.

3) Intimidating robot:

This is Rocky.... Recently we have mailed you about the details of the conference. But we still have not received your response. So today we contact you again.

NB: Although I am joking in this post, I do think these fake journals and conferences are a very serious problem. The fact that they are still around means enough money (mostly taxpayer money) is being spent to keep them in business. If you want to learn more, this blog does a good job of reporting on them and includes a list of culprits.


Picking a (bio)statistics thesis topic for real world impact and transferable skills

One of the things that was hardest for me in graduate school was starting to think about my own research projects and not just the ideas my advisor fed me. I remember that it was stressful because I didn't quite know where to start. After having done this for a while and particularly after having read a bunch of papers by people who are way more successful than I am, I have come to the following algorithm as a means for finding a topic that will have real world impact and also give you skills to take on new problems in a flexible way.

  1. Find a scientific problem that hasn't been solved with data (by far the hardest part)
  2. Define your metric for success
  3. Collect data/partner up with someone with data for that problem
  4. Create a good solution to the problem
  5. Only invent new methods if you must
  6. (Optional) Write software and document the hell out of it
  7. (Optional) Respond to users and update as needed
  8. Don't get (meanly) competitive

The first step is definitely the most important and the hardest. The balance is between big important problems that lots of people are working on but where the potential for innovation is low, and small detailed problems where you won't have serious competition but you will have limited impact. In general, good ways to find scientific problems are the following. (1) Find close and real scientific/applied collaborators. Not "real" like you talk to them once a month; real like you have a weekly meeting, you try to understand how their data are collected or generated, and you ask them specifically what problems prevent them from doing their job well, then solve those problems. (2) Come up with a scientific question of your own. In mature research areas like genomics this requires a huge amount of reading to know what people have done before you, or to at least know what new technologies/data are becoming available. (3) Read a ton of papers and find one that produces interesting data you think could answer a question the authors haven't asked. In general, the key is to put the problem first, before you even think about how to quantify or answer the question.

Next you have to define your metric for success. This metric should be scientific. You should try to say, "if I could predict x at 70% accuracy I could solve scientific problem y" or "if I could infer the relationship between x and y I would know something about z". The metric should be compared to the scientific standards in the field. As an example, screening tests for the general population often must be 99% sensitive and specific (or more) due to low prevalence. But in a subpopulation, sensitivity and specificity of 70% or 80% may be really useful.
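To see why low prevalence forces such stringent requirements, here is a quick positive predictive value calculation in R; the sensitivity, specificity, and prevalence values are purely illustrative.

```r
# Positive predictive value (probability that a positive test is a true positive),
# computed with Bayes' rule. Numbers are illustrative, not from any real test.
ppv <- function(sens, spec, prev) {
  (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
}

ppv(0.99, 0.99, 0.001)  # ~0.09: at 0.1% prevalence, most positives are still false
ppv(0.80, 0.80, 0.30)   # ~0.63: in a high-prevalence subpopulation, 80% can be useful
```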

Then you find the data. Here the key quote comes from Tukey:

The data may not contain the answer. The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data.

My experience is that when you start with the problem first, the data are often hard to come by, have quirks, or are not quite right for the problem you want to solve. Generating the perfect data is often very expensive, so a huge amount of the effort you will spend is either (a) generating the perfect data or (b) determining if the data you collected are "good enough" to answer the question. One important point is that knowing when you have failed is the name of the game here. If you get stuck once, you should try again. If you get stuck 100 times, it might be time to look for a different data set or figure out why the problem is unanswerable with current data. Incidentally, this is the most difficult part of the approach I'm proposing for coming up with topics. Failure is both likely and frequent, but that is a good thing when you are in grad school if you can learn from it and learn to predict when you are going to fail.

Since you've identified a problem that hasn't been solved before in step 1, the first thing to try is to come up with a sensible solution using only the methods that already exist. In many cases, these existing methods will work pretty well. If they don't, invent only as much statistical methodology and theory as you need to solve the problem. If you invent something new here, you should try it out on simple simulated examples and complex data where you either know the answer or can perform cross-validation/replication analysis.

At this point, if you have a basic solution to the problem, even if it is just the t-test, you are in great shape! You have solved a problem that is new and you are ready to publish. If you have invented some methods along the way, publish those, too!

In some cases the problems you solve will be focused on an area where lots of other people can collect similar data to answer similar problems. In this case, your most direct route to maximum impact is to write simple, usable, and really well documented software other people can use. Write it in R, make it free, give it a vignette and advertise it! If people use your software they will send you bug reports, patches, typos, fixes, and wish lists of things they want your software to do. The more you help people and respond, the more your software will get used and the more impact your method will have.

Step 8 is often the hardest part. If you do something interesting, you will have a ton of competitors. People will write down better and more precise methods and will "beat" your method. That's ok; in fact it is good! The more people that compare to your approach, the more you know you picked a good problem. In some cases, people will genuinely create better methods than you will. Learn from them and make your methods and software better. But try not to be upset that they wrote a paper about how their idea is so much better than yours; it is a high compliment that they thought your idea was worth comparing to. This is one step the author of this post hasn't nailed down perfectly, but I think the more you can do it the happier you will be.

The best part of this algorithm is that it gives you the problem-first focus that will make it easy to transition if you do a postdoc with a different kind of data, or move to industry, or start with new collaborators.


Correlation does not imply causation (parental involvement edition)

The New York Times recently published an article on education titled "Parental Involvement Is Overrated". Most research in this area supports the opposite view, but the authors claim that "evidence from our research suggests otherwise". Before you stop helping your children understand long division or correcting their grammar, you should learn about one of the most basic statistical concepts: correlation does not imply causation. The first two chapters of this very popular textbook describe the problem, and even Khan Academy has a class on it. As several of the commenters on the NYT article point out, the authors fail to make this distinction.

To illustrate the problem, imagine you want to know how effective tutoring is for students in a math class you are teaching. So you compare the test scores of students who received tutoring to those who didn't. You find that receiving tutoring is correlated with lower test scores. So do you conclude that tutoring causes lower grades? Of course not! In this particular case we are confusing cause and effect: students who have trouble with math are much more likely to seek out tutoring, and this is what drives the observed correlation. With that example in mind, consider this quote from the New York Times article:

When we examined whether regular help with homework had a positive impact on children’s academic performance, we were quite startled by what we found. Regardless of a family’s social class, racial or ethnic background, or a child’s grade level, consistent homework help almost never improved test scores or grades.... Even more surprising to us was that when parents regularly helped with homework, kids usually performed worse.

A first question we would ask here is: how do we know that the children's performance would not have been even worse had they not received help? I imagine the authors made use of controls: we compare the group that received the treatment (regular help with homework) to a control group that did not. But this brings up a more difficult question: how do we know that the treatment and control groups are comparable?

In a randomized controlled experiment, we would take a group of kids and randomly assign each one to the treatment group (will be helped with their homework) or control group (no help with homework). By doing this we can use probability calculations to determine the range of differences we expect to see by chance when the treatment has no effect.  Note that by chance one group may end up with a few more "better testers" than the other. However, if we see a big enough difference that can't be explained by chance, then the alternative that the treatment is responsible for the observed differences becomes more believable.

Given all the prior research (and common sense) suggesting that parental involvement, in its many manifestations, is in fact helpful to students, many would consider it unethical to run a randomized controlled trial on this issue (you would knowingly hurt the control group). Therefore, the authors are left with no choice but to use an observational study to reach their conclusions. In this case, we have no control over who receives help and who doesn't. Kids who require regular help with their homework are different in many ways from kids who don't, even after correcting for all the factors mentioned. For example, one can envision how kids who have a mediocre teacher or have trouble with tests are more likely to be in the treatment group, while kids who naturally test well or go to schools that offer in-school tutoring are more likely to be in the control group.
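To make the selection problem concrete, here is a small simulation in R of the tutoring example from above; the effect sizes and the selection mechanism are invented for illustration. Because weaker students are more likely to seek help, a naive comparison makes the help look harmful even though, in the simulation, it adds two points to every helped student's score.

```r
set.seed(1)
n <- 1000
ability  <- rnorm(n)                               # unobserved math ability
tutoring <- rbinom(n, 1, plogis(-2 * ability))     # weaker students are more likely to get help
score    <- 50 + 10 * ability + 2 * tutoring + rnorm(n, sd = 5)  # help truly adds 2 points

tapply(score, tutoring, mean)        # naive comparison: the tutored group scores lower
coef(lm(score ~ tutoring + ability)) # adjusting for ability recovers the positive effect
```

In real observational data, of course, "ability" and its many proxies are exactly the things we cannot fully measure or adjust for.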

I am not an expert on education, but as a statistician I am skeptical of the conclusions of this data-driven article. In fact, I would recommend parents actually do get involved early on by, for example, teaching children that correlation does not imply causation.

Note that I am not saying that observational studies are uninformative. If properly analyzed, observational data can be very valuable. For example, the data supporting smoking as a cause of lung cancer are all observational. Furthermore, there is an entire subfield within statistics (referred to as causal inference) that develops methodologies to deal with observational data. But unfortunately, observational data are commonly misinterpreted.


The #rOpenSci hackathon #ropenhack

Editor's note: This is a guest post by Alyssa Frazee, a graduate student in the Biostatistics department at Johns Hopkins and a participant in the recent rOpenSci hackathon. 

Last week, I took a break from my normal PhD student schedule to participate in a hackathon in San Francisco. The two-day event was hosted by rOpenSci, an organization committed to developing R tools for open science. Working with several wonderful people from the R community was inspiring, humbling, and incredibly fun. So many great things happened in a two-day whirlwind: it would be impossible now to capture the whole thing in a narrative that would do it justice. So instead of a play-by-play, here are some of the quotes from the event that I've recently been reflecting on:

"The enemy isn't R, Python, or Julia. The enemy is closed-source science."

There have been some lively internet debates recently about mathematical and scientific computing languages. While conversations about these languages are interesting and necessary, the forest often gets lost for the trees: in the end, we are here to do good science, and we should use whatever makes that easiest. We should build strong, collaborative communities, both within languages and across them. A closed-source science mentality hinders this kind of collaboration. I thought one of the hackathon projects, an R kernel for the iPython notebook, especially exemplified a commitment to open science and to cross-language collaboration. It was so awesome to spend two days with R folks like this who genuinely enjoy working together, in any language, to make scientific computing better.

"Pair debugging is fun!"

This quote perfectly captures one of my favorite things about hackathons: genuine group work! During my time in graduate school, I've done most of my programming solo. I think this is the nature of getting a PhD: the projects have to be yours, and all the other PhD students are working on their solo projects. So I really enjoyed the hackathon because it facilitated true pair/group work: two or more peers working on the same project, in the same room, at the same time. I like this work strategy for many reasons:

• The rate at which I learn new things is high, since it's so easy to ask a question. Lots of time is saved by not having to sift through internet search results.

• Sometimes I find solo debugging to be pretty painful. But I think pair debugging is fun and satisfying: it's like an inspirational sports movie. It's you and me, the ragtag underdogs, against the computer, the evil bully from across town. Relatedly, the sweet sweet taste of victory is also shared.

• It's easier to stay focused on the task at hand. I'm not as easily distracted by email/Twitter/Facebook/blogs/the rest of the internet when I'm not coding alone.

My academic sister, Hilary, and I did a good amount of pair debugging during the hackathon, and I kept finding myself thinking "I wish this would have been possible while we were both grad students!" I think we both had lots of fun working together. For a short discussion of more fun aspects of pairing, here's a blog post I like. At the rOpenSci hackathon in particular, group work was especially awesome because we could ask questions in person to people who have written the libraries our projects depend on, or to RStudio developers, or to GitHub employees, or to potential users of the projects. Just some of the many joys of having lots of talented, friendly R programmers all in the same space!

"Want me to write some unit tests for your unit tests?"

During the hackathon, I primarily worked on a unit-testing package called testdat. Testdat provides functions that check for and fix common problems with tabular data, like UTF-8 characters and inconsistent missing data codes, with the overall goal of making data processing/cleaning more reproducible. The project was really good for a two-day hackathon, since it was small enough to almost finish in two days, and it was very modular: one person worked on the missing data checking functions, another worked on UTF-8 checking, a third wrote the tests for the finished functions (unit tests for unit tests!), etc. Also, it didn't require a lot of background knowledge in a specific subject area or a deep dive into an existing codebase: all it required were some coding skills and perhaps a frustrating experience with messy data in the past (for motivation).

Finding an appropriate project to work on was probably my biggest challenge at this hackathon. I spent the summer at Hacker School, where the days were structured similarly to how they were at the rOpenSci hackathon: there wasn't really any structure. In both scenarios, the minimal structure was intentional. Lots of great collaborative work can happen with a few free days of hacking. But with two free days at the hackathon (versus Hacker School's 50), it was much more important to choose a good project quickly and get coding. One way to do this would have been to arrive at the hackathon with a small project in hand (many people did this). My strategy, however, was to chat with a few different project groups for the first hour or two on day 1, and then stick with one of those groups for the rest of the time. It worked well -- as I mentioned above, testdat was a great project -- but I did feel some time pressure (internally!) to choose a small project quickly.

For a look at some of the other hackathon projects, check out rOpenSci's GitHub page, the hackathon GitHub page, project-specific posts on the rOpenSci blog, or the hackathon's live-tweet hashtag, #ropenhack.

"Why are there so many Minnesotans here?"

There were at least four hackathon attendees (out of 35-40 total) that either currently live in or hail from Minnesota. Talk about overrepresentation! We are everywhere.

"I love my job."

I'm a late-stage PhD student, so the job market is looming closer with every passing day. When I meet new people working in statistics, genomics, data science, or another related field, I like to ask them whether they like their current work, how it compares to other jobs they've had, etc. Hackathon attendees had all kinds of jobs: academic researcher, industry scientist, freelancer, student, etc. The majority of the responses to my inquiries about how they liked their work was "I love it." The situation made the job market seem exciting, rather than intimidating: among the hackathon attendees and folks from the SF data science community that hung out with us for a dinner, the jobs themselves were pretty heterogeneous, but the general enjoyment of the work seemed consistently high.

"What's the future of R?"

I suppose we should have known that existential questions like this would come up when 40 passionate R people spend two straight days together. Our discussion of the future of R didn't really yield any definitive answers or predictions, but I think we have big dreams for what R's future will look like: vibrant, open, collaborative, and scientifically driven. If the hackathon atmosphere was any indication of R's future, I'm feeling pretty optimistic about where things are going.

In closing: we're really grateful to the people and organizations that made the hackathon possible: rOpenSci, Karthik Ram, GitHub, the Sloan Foundation, and F1000 Research. Thanks for strengthening the R community, giving us the chance to meet each other outside of the internet, and helping us have a great time doing R, for science, together!


For genomic statisticians, writing good software can have more impact than publishing in high impact journals

Every once in a while we see computational papers published in science journals with high impact factors. Genomics-related methods appear quite often in these journals. Several of my junior colleagues express frustration that all their papers get rejected from these journals. I tell them that the same is true for most of my papers and remind them of these examples:

Method                  Journal         Year  # Citations
PLINK                   AJHG            2007  6481
Bioconductor            Genome Biology  2004  5973
RMA                     Biostatistics   2003  5674
limma                   SAGMB           2004  5637
quantile normalization  Bioinformatics  2003  4646
Bowtie                  Genome Biology  2009  3849
BWA                     Bioinformatics  2009  3327
loess normalization     NAR             2002  3313
q-values                JRSS-B          2002  2758
TopHat                  Bioinformatics  2008  1868
vsn                     Bioinformatics  2002  1398
GCRMA                   JASA            2004  1397
MACS                    Genome Biology  2008  1277
DESeq                   Genome Biology  2010  1264
CBS                     Biostatistics   2004  1051
R/qtl                   Bioinformatics  2003  1027

Let me know of other examples in the comments.
Update: I added one more to the list.


This is how an important scientific debate is being used to stop EPA regulation

Environmental regulation in the United States has protected human health for over 40 years. Since the Clean Air Act (CAA) was enacted in 1970, levels of outdoor air pollution have dropped dramatically, changing the landscape of once heavily polluted cities like Los Angeles and Pittsburgh. A 2011 cost-benefit analysis conducted by the U.S. Environmental Protection Agency estimated that the 1990 amendments to the CAA prevented 160,000 deaths and 13 million lost work days in the year 2010 alone. It also estimated that the monetary benefits of the CAA were 30 times greater than the costs of implementing the regulations.

The benefits of environmental regulations like the CAA significantly outweigh their costs. But there are still costs, and those costs must be borne by someone. The burden is usually put on the polluters, such as the automobile and power generation industries, which have long fought any notion of air pollution regulation as a threat to their existence. Initially, as air pollution and health studies were still emerging, opponents of regulation often challenged the science itself, claiming flaws in the methodology, the measurements, or the interpretation. But when study after study demonstrated a connection between outdoor air pollution and a variety of health problems, it became increasingly difficult for critics to mount a credible challenge. Lawsuits are another tactic used by industry, with one case brought by the American Trucking Association going all the way to the U.S. Supreme Court.

The latest attack comes from the House of Representatives in the form of the Secret Science Reform Act, or H.R. 4102. In summary, the proposed bill requires that every scientific paper cited by the EPA to justify a new rule or regulation be reproducible. What exactly does this mean? To answer that question we need to take a brief diversion into some recent important developments in statistical science.

The idea behind reproducibility is simple. All the data used in a scientific paper and all the computer code used to analyze those data should be made available to other researchers and the public. It may be surprising that much of this material isn't already available. The primary reason most data aren't available is that, until recently, most people didn't ask scientists for their data. The data were often small and collected for a specific purpose, so other scientists and the general public just weren't that interested. If a scientist were interested in checking the truth of a claim, she could simply repeat the experiment in her lab to see if the claim could be replicated.

The nature of science has changed quickly over the last three decades. There has been an explosion of data, fueled by the decreasing cost of data collection technologies and computing power. At the same time, increased access to sophisticated computing power has let scientists conduct more sophisticated analyses on their data. The massive growth in data and the increasing sophistication of the analyses has made communicating what was done in a scientific study more complicated.

The traditional medium of journal publications has proven to be inadequate for describing the important details of a data analysis. As a result, it has been said that scientific articles are merely the “advertising” for the research that was conducted. The real research is buried in the data and the computer code actually used to compute the results. Journals have traditionally not required that data or computer code be published along with papers. As a result, many important details may be lost and prevent key studies from being fully reproducible.

The explosion of data has also made completely replicating a large study by an independent scientist much more difficult and costly. A large study is expensive to conduct in the first place; there is usually little appetite or funding to repeat it.  The result is that much of published scientific research cannot be reproduced by other scientists because the necessary data and analytic details are not available to others.

The scientific community is currently engaged in a debate over how to improve reproducibility across all of science. You might be tempted to ask, why not just share the data? Even if we could get everyone to agree with that in principle, it’s not clear how to do it.

Imagine if everyone in the U.S. decided we were all going to share our movie collections, and suppose for the sake of this example that the movie industry did not object. How would it work? Numerous questions immediately arise. Where would all these movies be stored? How would they be transferred from one person to another? How would I know what movies everyone else had? If my movies are all on the old DVD format, do I need to convert them to some other format before I can share? My Internet connection is very slow, how can I download a 3 hour HD movie? My mother doesn’t use computers much, but she has a great movie collection that I think others should have access to. What should she do? And who is going to pay for all of this? While each question may have a reasonable answer, it’s not clear what is the optimal combination and how you might scale it to the entire country.

Some of you may recall that the music industry had a brilliant sharing service that essentially allowed everyone to share their music collections. It was called Napster. Napster solved many of the problems raised above except for one: it failed to survive. So even when a decent solution is found, there's no guarantee that it will always be there.

As outlandish as this example may seem, minor variations on these exact questions come up when we discuss how to share scientific data. The volume of data being produced today is enormous and making all of it available to everyone is not an easy task. That’s not to say it is impossible. If smart people get together and work constructively, it is entirely possible that a reasonable approach could be found. But at this point, a credible long-term solution has yet to emerge.

This brings us back to the Secret Science Reform Act. The latest tactic by opponents of air quality regulation is to force the EPA to ensure that all of the studies that it cites to support new regulations are reproducible. A cursory reading of the bill gives the impression that the sponsors are genuinely concerned about making science more transparent to the public. But when one reads the language of the bill in the context of ongoing discussions about reproducibility, it becomes clear that the sponsors of the bill have no such goal in mind. The purpose of H.R. 4102 is to prevent the Environmental Protection Agency from proposing new regulations.

The EPA develops rules and regulations on the basis of scientific evidence. For example, the Clean Air Act requires EPA to periodically review the scientific literature for the latest evidence on the health effects of air pollution. The science the EPA considers needs to be published in peer-reviewed journals. This makes the EPA a key consumer of scientific knowledge and it uses this knowledge to make informed decisions about protecting public health. What the EPA is not is a large funder of scientific studies. The entire budget for the Office of Research and Development at EPA is roughly $550 million (fiscal 2014), or less than 2 percent of the budget for the National Institutes of Health (about $30 billion for fiscal 2014). This means EPA has essentially no influence over the scientists behind many of the studies it cites because it funds very few of those studies. The best the EPA can do is politely ask scientists to make their data available. If a scientist refuses, there’s not much the EPA can use as leverage.

The latest controversy to come up involves the Harvard Six Cities study published in 1993. This landmark study found a large difference in mortality rates comparing cities with high and low air pollution, even after adjusting for smoking and other factors. The House committee has been trying to make the data for this study publicly available so that it can ensure that regulations are "backed by good science". However, the Committee has either forgotten or never knew that this particular study has been fully reproduced by independent investigators. In 2005, independent investigators found that they were "...able to reproduce virtually all of the original numerical results, including the 26 percent increase in all-cause mortality in the most polluted city (Steubenville, OH) as compared to the least polluted city (Portage, WI). The audit and validation of the Harvard Six Cities Study conducted by the reanalysis team generally confirmed the quality of the data and the numerical results reported by the original investigators."

It would be hard to find an air pollution study that has been subject to more scrutiny than the Six Cities study. Even if you believed the Six Cities study was totally wrong, its original findings have been replicated numerous times since its publication, with different investigators, in different populations, using different analysis techniques, and in different countries. If you're looking for an example where the science was either not reproducible or not replicable, sorry, but this is not your case study.

Ultimately, it is clear that the sponsors of this bill are cynically taking advantage of a genuine (but difficult) scientific debate over reproducibility to push a political agenda. Scientists are in agreement that reproducibility is important, but there is no consensus yet on how to make it happen for everyone. By forcing the EPA to ensure reproducibility of the science on which it bases regulation, lawmakers are asking the EPA to solve a problem that the entire scientific community has yet to figure out. The end result of passing a bill like H.R. 4102 is that the EPA will be forced to stop proposing any new regulation, handing a major victory to opponents of air quality standards and dealing a major blow to public health in the U.S.
