Simply Statistics


A thanksgiving dplyr Rubik's cube puzzle for you

Nick Carchedi is back visiting from DataCamp and for fun we came up with a dplyr Rubik's cube puzzle. Here is how it works. To solve the puzzle you have to make a 4 x 3 data frame that spells Thanksgiving like this:

To solve the puzzle you need to pipe this data frame in 

and pipe out the Thanksgiving data frame using only the dplyr commands arrange, mutate, slice, filter and select. For advanced users you can try our slightly more complicated puzzle:

See if you can do it this fast. Post your solutions in the comments and Happy Thanksgiving!
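The original puzzle data frames appeared as images, so here is only a made-up sketch of what a solution pipeline might look like, using the five allowed verbs on an invented scrambled data frame (not the actual puzzle):

```r
library(dplyr)

# A made-up scrambled data frame standing in for the puzzle input
df <- data.frame(
  row = c(3, 1, 2),
  txt = c("giving", "Thanks", "happy"),
  stringsAsFactors = FALSE
)

df %>%
  arrange(row) %>%                # reorder rows
  filter(txt != "happy") %>%      # keep rows matching a condition
  mutate(txt = tolower(txt)) %>%  # transform a column
  slice(1:2) %>%                  # keep rows by position
  select(txt)                     # keep/reorder columns
```

The real puzzle is just this idea at a larger scale: composing the five verbs until the letters land in the right cells.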


So you are getting crushed on the internet? The new normal for academics.

Roger and I were just talking about all the discussion around the Case and Deaton paper on death rates for middle class people. Andrew Gelman, among many others, discussed it; they noticed a potential bias in the analysis and did some re-analysis. Just yesterday an economist blogger wrote a piece about academics versus blogs and how many academics are taken by surprise when they see their paper being discussed so rapidly on the internet. Much of the debate comes down to the speed, tone, and ferocity of internet discussion of academic work - along with the fact that sometimes it isn't fully fleshed out.

I have been seeing this play out not just in the case of this specific paper, but many times when folks have been confronted with blogs or the quick publication process of f1000Research. I think it is pretty scary for folks who aren't used to "internet speed" to see this play out and I thought it would be helpful to make a few points.

  1. Everyone is an internet scientist now. The internet has arrived as part of academics and if you publish a paper that is of interest (or if you are a Nobel prize winner, or if you dispute a claim, etc.) you will see discussion of that paper within a day or two on the blogs. This is now a fact of life.
  2. The internet loves a fight. The internet responds best to personal/angry blog posts or blog posts about controversial topics like p-values, errors, and bias. Almost certainly if someone writes a blog post about your work or an f1000 paper it will be about an error/bias/correction or something personal.
  3. Takedowns are easier than new research and happen faster. It is much, much easier to critique a paper than to design an experiment, collect data, figure out what question to ask, ask it quantitatively, analyze the data, and write it up. This doesn't mean the critique won't be good/right; it just means it will happen much, much faster than it took you to publish the paper, because it is easier to do. All it takes is noticing one little bug in the code or one error in the regression model. So be prepared for speed in the response.

In light of these three things, you have a couple of options about how to react if you write an interesting paper and people are discussing it - which they will certainly do (point 1), in a way that will likely make you uncomfortable (point 2), and faster than you'd expect (point 3). The first thing to keep in mind is that the internet wants you to "fight back" and wants to declare a "winner". Reading about amicable disagreements doesn't build an audience. That is why there is reality TV. So there will be pressure for you to score points, be clever, be fast, and refute every point or be declared the loser. I have found from my own experience that that is what I feel like doing too. I think resisting this urge is both (a) very, very hard and (b) the right thing to do. I find the best solution is to be proud of your work, but be humble, because no paper is perfect and that's ok. If you do the best you can, sensible people will acknowledge that.

I think these are the three ways to respond to rapid internet criticism of your work.

  • Option 1: Respond on internet time. This means if you publish a big paper that you think might be controversial you should block off a day or two to spend time on the internet responding. You should be ready to do new analyses quickly, be prepared to admit mistakes quickly if there are any, and be prepared to make it clear when there aren't. You will need social media accounts and you should probably have a blog so you can post longer form responses. Github/Figshare accounts make it easy to quickly share quantitative/new analyses. Again, your goal is to avoid the personal and stick to facts, so I find that Twitter/Facebook are best for disseminating your longer form responses on blogs/Github/Figshare. If you are going to go this route you should try to respond to as many of the major criticisms as possible, but usually they cluster into one or two specific comments, which you can address all at once.
  • Option 2: Respond in academic time. You might have spent a year writing a paper only to have people respond to it essentially instantaneously. Sometimes they will have good points, but they will rarely have carefully thought out arguments given the internet-speed response (although remember point 3: good critiques can come faster than good papers). One approach is to collect all the feedback, ignore the pressure for an immediate response, and write a careful, scientific response which you can publish in a journal or in a fast outlet like f1000Research. I think this route can be the most scientific and productive if executed well. But it will be hard, because people will treat it like "you didn't have a good answer so you didn't respond immediately". The internet wants a quick winner/loser and that is terrible for science. Even if you choose this route, you should make sure you have a way of publicizing your well-thought-out response - through blogs, social media, etc. - once it is done.
  • Option 3: Do not respond. This is what a lot of people do and I'm unsure if it is ok or not. Clearly internet-facing commentary can have an impact on you/your work/how it is perceived, for better or worse. So if you ignore it, you are ignoring those consequences. This may be ok, but depending on the severity of the criticism it may be hard to deal with, and it may mean that you have a lot of questions to answer later. Honestly, I think as time goes on, if you write a big paper under a lot of scrutiny, Option 3 is going to go away.

All of this only applies if you write a paper that a ton of people care about/is controversial. Many technical papers won't have this issue and if you keep your claims small, this also probably won't apply. But I thought it was useful to try to work out how to act under this "new normal".


How I decide when to trust an R package

One thing that I've given a lot of thought to recently is the process that I use to decide whether I trust an R package or not. Kasper Hansen took a break from trolling me on Twitter to talk about how he trusts packages on Github less than packages that are on CRAN and particularly Bioconductor. He makes a couple of points that I think are very relevant. First, that having a package on CRAN/Bioconductor raises trust in that package:

The primary reason is that Bioc/CRAN demonstrate something about the developer's willingness to do the boring but critically important parts of package development, like documentation, vignettes, minimum coding standards, and making sure their code isn't just a rehash of something else. The other big point Kasper made was the difference between a repository, which is user-oriented and should provide certain guarantees, and Github, which is a developer platform and makes things easier/better for developers but doesn't have a user guarantee system in place.

This discussion got me thinking about when/how I depend on R packages and how I make that decision. The scenarios where I depend on R packages are:

  1. Quick and dirty analyses for myself
  2. Shareable data analyses that I hope are reproducible
  3. As dependencies of R packages I maintain

As you move from 1-3 it becomes more and more of a pain if the package I'm depending on breaks. If it is just something I was doing for fun, it's not that big of a deal. But if it means I have to rewrite/recheck/rerelease my R package, then that is a much bigger headache.

So my scale for how stringent I am about relying on packages varies by the type of activity, but what are the criteria I use to measure how trustworthy a package is? For me, the criteria are in this order:

  1. People prior 
  2. Forced competence
  3. Indirect data

I'll explain each criterion in a minute, but the main purpose of using these criteria is (a) to ensure that I'm using a package that works and (b) to ensure that if the package breaks I can trust it will be fixed or at least I can get some help from the developer.

People prior

The first thing I do when I look at a package I might depend on is look at who the developer is. If that person is someone I know has developed widely used, reliable software and who quickly responds to requests/feedback then I immediately trust the package. I have a list of people like Brian, or Hadley, or Jenny, or Rafa, who could post their package just as a link to their website and I would trust it. It turns out almost all of these folks end up putting their packages on CRAN/Bioconductor anyway. But even if they didn't I assume that the reason is either (a) the package is very new or (b) they have a really good reason for not distributing it through the normal channels.

Forced competence

For people who I don't know about or whose software I've never used, then I have very little confidence in the package a priori. This is because there are a ton of people developing R packages now with highly variable levels of commitment to making them work. So as a placeholder for all the variables I don't know about them, I use the repository they choose as a surrogate. My personal prior on the trustworthiness of a package from someone I don't know goes something like:

(Figure: my prior trust ordering - Bioconductor > CRAN > Github)

This prior is based on the idea of forced competence. In general, you have to do more to get a package approved on Bioconductor than on CRAN (for example, you have to have a good vignette) and you have to do more to get a package on CRAN (pass R CMD check and survive the review process) than to put it on Github.

This prior isn't perfect, but it does tell me something about how much the person cares about their package. If they go to the work of getting it on CRAN/Bioc, then at least they cared enough to document it. They are at least forced to be minimally competent - at least at the time of submission, and enough for the package to still pass checks.

Indirect data

After I've applied my priors I then typically look at the data. For Bioconductor I look at the badges: how often the package is downloaded, whether it passes the checks, and how well it is covered by tests. I'm already inclined to trust it a bit since it is on that platform, but I use the data to adjust my prior. For CRAN I might look at the download stats provided by RStudio. The interesting thing is that, as John Muschelli points out, Github actually has the most indirect data available for a package:

If I'm going to use a package that is on Github from a person who isn't on my prior list of people to trust, then I look at a few things. The number of stars/forks/watchers is a quick and dirty estimate of how used a package is. I also look very carefully at how many commits the person has submitted, both to the package in question and to all their other packages, over the last couple of months. If the person isn't actively developing either the package or anything else on Github, that is a bad sign. I also look to see how quickly they have responded to issues/bug reports on the package in the past, if possible. One idea I haven't used but I think is a good one is to submit an issue for a trivial change to the package and see if I get a response very quickly. Finally, I look to see if they have some demonstration that their package works across platforms (say, with a Travis badge). If the package is highly starred, frequently maintained, has all issues responded to, and passes checks on all platforms, then that data might overwhelm my prior and I'd go ahead and trust the package.
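As a toy illustration of this prior-plus-evidence reasoning, here is a rough scoring function; the weights and thresholds are entirely invented for illustration, not anything from a real system:

```r
# Toy trust score: a repository-based prior (forced competence) adjusted
# by indirect data (stars, recent activity, CI). Weights are made up.
trust_score <- function(repo = c("bioconductor", "cran", "github"),
                        known_developer = FALSE,
                        stars = 0, recent_commits = 0, has_ci = FALSE) {
  repo <- match.arg(repo)
  if (known_developer) return(1)  # the "people prior" dominates everything
  prior <- c(bioconductor = 0.8, cran = 0.6, github = 0.2)[[repo]]
  evidence <- 0.1 * (stars > 50) +        # widely used?
              0.1 * (recent_commits > 0) + # actively developed?
              0.1 * has_ci                 # tested across platforms?
  min(prior + evidence, 1)
}

# A well-maintained Github package can partially overcome its low prior
trust_score("github", stars = 200, recent_commits = 10, has_ci = TRUE)
```

The point of the sketch is only the shape of the decision: a prior set by the repository, overridden by the people prior, and nudged by indirect data.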


In general, one of the best things about the R ecosystem is being able to rely on other packages so that you don't have to write everything from scratch. But there is a hard balance to strike in keeping the dependency list small. One way I maintain this balance is using the strategy I've outlined to worry less about whether my dependencies are trustworthy.


Faculty/postdoc job opportunities in genomics across Johns Hopkins

It's pretty exciting to be in genomics at Hopkins right now with three new Bloomberg professors in genomics areas, a ton of stellar junior faculty, and a really fun group of students/postdocs. If you want to get in on the action here is a non-comprehensive list of great opportunities.

Faculty Jobs

Job: Multiple tenure track faculty positions in all areas including in genomics
Department:  Biostatistics
To apply:
Deadline: Review ongoing

Job: Tenure track position in data intensive biology
Department:  Biology
To apply
Deadline: Nov 1st and ongoing

Job: Tenure track positions in bioinformatics, with focus on proteomics or sequencing data analysis
Department:  Oncology Biostatistics
To apply
Deadline: Review ongoing


Postdoc Jobs

Job: Postdoc(s) in statistical methods/software development for RNA-seq
Employer:  Jeff Leek
To apply: email Jeff (
Deadline: Review ongoing

Job: Data scientist for integrative genomics in the human brain (MS/PhD)
Employer:  Andrew Jaffe
To apply: email Andrew (
Deadline: Review ongoing

Job: Research associate for genomic data processing and analysis (BA+)
Employer:  Andrew Jaffe
To apply: email Andrew (
Deadline: Review ongoing

Job: PhD developing scalable software and algorithms for analyzing sequencing data
Employer:  Ben Langmead
To apply:
Deadline: See site

Job: Postdoctoral researcher developing scalable software and algorithms for analyzing sequencing data
Employer:  Ben Langmead
To apply:  email Ben (
Deadline: Review ongoing

Job: Postdoctoral researcher developing algorithms for challenging problems in large-scale genomics: whole-genome assembly, RNA-seq analysis, and microbiome analysis
Employer:  Steven Salzberg
To apply:  email Steven (
Deadline: Review ongoing

Job: Research associate for genomic data processing and analysis (BA+) in cancer
Employer:  Luigi Marchionni (with Don Geman)
To apply:  email Luigi (
Deadline: Review ongoing

Job: Postdoctoral researcher developing algorithms for biomarkers development and precision medicine application in cancer
Employer:  Luigi Marchionni (with Don Geman)
To apply:  email Luigi (
Deadline: Review ongoing

Job: Postdoctoral researcher developing methods in machine learning, genomics, and regulatory variation
Employer:  Alexis Battle
To apply:  email Alexis (
Deadline: Review ongoing

Job: Postdoctoral fellow with interests in biomarker discovery for Alzheimer’s disease
Employer:  Madhav Thambisetty / Ingo Ruczinski
To apply:
Deadline: Review ongoing

Job: Postdoctoral positions for research in the interface of statistical genetics, precision medicine and big data
Employer:  Nilanjan Chatterjee
To apply:
Deadline: Review ongoing

Job: Postdoctoral research developing algorithms and software for time course pattern detection in genomics data
Employer:  Elana Fertig
To apply:  email Elana (
Deadline: Review ongoing

Job: Postdoctoral fellow to develop novel methods for large-scale DNA and RNA sequence analysis related to human and/or plant genetics, such as developing methods for discovering structural variations in cancer or for assembling and analyzing large complex plant genomes.
Employer:  Mike Schatz
To apply:  email Mike (
Deadline: Review ongoing


We are all always on the hunt for good Ph.D. students. At Hopkins students are admitted to specific departments. So if you find a faculty member you want to work with, you can apply to their department. Here are the application details for the various departments admitting students to work on genomics:





The statistics identity crisis: am I really a data scientist?






Tl;dr: We will host a Google Hangout of our popular JSM session on October 30th from 2-4 PM EST.


I organized a session at JSM 2015 called "The statistics identity crisis: am I really a data scientist?" The session turned out to be pretty popular:

but it turns out not everyone fit in the room:

Thankfully, Steve Pierson at the ASA had the awesome idea to re-run the session for people who couldn't be there. So we will be hosting a Google Hangout with the following talks:

  • "'Am I a Data Scientist?': The Applied Statistics Student's Identity Crisis" - Alyssa Frazee, Stripe
  • "How Industry Views Data Science Education in Statistics Departments" - Chris Volinsky, AT&T
  • "Evaluating Data Science Contributions in Teaching and Research" - Lance Waller, Emory University
  • "Teach Data Science and They Will Come" - Jennifer Bryan, The University of British Columbia

You can watch it on Youtube or Google Plus. Here is the link:

The session will be held October 30th (tomorrow!) from 2-4 PM EST. You can watch it live and discuss the talks using the hashtag #JSM2015, or you can watch later as the video will remain on Youtube.


A glass half full interpretation of the replicability of psychological science

tl;dr: 77% of replication effects from the psychology replication study were in (or above) the 95% prediction interval based on the original effect size. This isn't perfect and suggests (a) there is still room for improvement, (b) the scientists who did the replication study are pretty awesome at replicating, (c) we need a better definition of replication that respects uncertainty but (d) the scientific sky isn't falling. We wrote this up in a paper on arxiv; the code is here. 

A week or two ago a paper came out in Science on Estimating the reproducibility of psychological science. The basic idea behind the study was to take a sample of studies that appeared in a particular journal in 2008 and try to replicate each of them. Here I'm using the definition that reproducibility is the ability to recalculate all results given the raw data and code from a study, and replicability is the ability to re-do the study and get a consistent result.

The paper is pretty incredible and the authors did an amazing job of going back to the original sources and trying to be faithful to the original study designs. I have to admit when I first heard about the study design I was incredibly pessimistic about the results (I suppose grouchy is a natural default state for many statisticians, especially those with sleep deprivation). I mean, 2008 was well before the push toward reproducibility had really taken off (Biostatistics was one of the first journals to adopt a policy on reproducible research, and that didn't happen until 2009). More importantly, the student researchers from those studies had possibly moved on, study populations may have changed, there could be any number of minor variations in the study design, and so forth. I thought the chances of getting any effects in the same range were probably pretty low.

So when the results were published I was pleasantly surprised. I wasn’t the only one:

But that was definitely not the prevailing impression that the paper left on social and mass media. A lot of the discussion around the paper focused on the idea that only 36% of the studies had a p-value less than 0.05 in both the original and replication study. But many of the sample sizes were small and the effects were modest. So the first question I asked myself was, "Well what would we expect to happen if we replicated these studies?" The original paper measured replicability in several ways and tried hard to calibrate expected coverage of confidence intervals for the measured effects.

Roger, Prasad, and I tried a slightly different approach. We estimated the 95% prediction interval for the replication effect given the original effect size.
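To sketch the idea, assuming the effects are measured as correlations, a 95% prediction interval for a replication effect can be built with the Fisher z-transform; this is only an illustration of the general approach, and the exact details of our analysis are in the arxiv paper and code linked above:

```r
# Sketch: 95% prediction interval for a replication correlation, given
# the original correlation r0 (sample size n0) and the replication
# sample size n1. Under the Fisher z-transform, the sampling variance
# of z is approximately 1/(n - 3), and the prediction interval must
# account for the variance in BOTH studies.
replication_pi <- function(r0, n0, n1, level = 0.95) {
  z0 <- atanh(r0)                           # Fisher z of original effect
  se <- sqrt(1 / (n0 - 3) + 1 / (n1 - 3))   # variation from both studies
  zcrit <- qnorm(1 - (1 - level) / 2)
  tanh(z0 + c(lower = -zcrit, upper = zcrit) * se)  # back-transform
}

replication_pi(r0 = 0.3, n0 = 50, n1 = 80)
```

Note how wide the interval is for small samples: a modest original effect is consistent with replication effects ranging from roughly zero to quite large.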



72% of the replication effects were within the 95% prediction interval and 2 were above the interval (showed a stronger signal in the replication than predicted from the original study). This definitely shows that there is still room for improvement in replication of these studies - we would expect 95% of the effects to fall into the 95% prediction interval. But at least in my opinion, 72% (or 77% if you count the 2 above the P.I.) of studies falling in the prediction interval is (a) not bad and (b) a testament to the authors of the reproducibility paper and their efforts to get the studies right.

An important point here is that replication and reproducibility aren't the same thing. When reproducing a study we expect the numbers and figures to be exactly the same. But a replication involves collecting new data and is subject to variation, so we don't expect the answer to be exactly the same in the replication. This is of course made more confusing by regression to the mean, publication bias, and the garden of forking paths. Our use of a prediction interval measures both the variation expected in the original study and in the replication. One thing we noticed when re-analyzing the data is how many of the studies had very low sample sizes.

(Figure: sample sizes in the original and replication studies)


Sample sizes were generally bigger in the replication, but often very low regardless. This makes it more difficult to disentangle what didn't replicate from what is just expected variation for a small-sample-size study. The question remains whether those small studies should be trusted in general, but for the purposes of measuring replication it makes the problem more difficult.

One thing I have been thinking about a lot, and that this study drove home, is that if we are measuring replication we need a definition that incorporates uncertainty directly. Suppose that you collect a data set D0 from an original study and D1 from a replication. Then the study replicates if D0 ~ F and D1 ~ F for some common distribution F. Informally, if the data are generated from the same distribution in both experiments then the study replicates. To get an estimate you apply a pipeline p() to the data set to get an estimate e0 = p(D0). If the study is also reproducible then p() is the same for both studies and p(D0) ~ G and p(D1) ~ G, subject to some conditions on p().

One interesting consequence of this definition is that each complete replication data set represents only a single data point for measuring replication. To measure replication with this definition you either need to make assumptions about the data generating distribution for D0 and D1, or you need to perform a complete replication of a study many times to determine if it replicates. However, it does mean that we can define replication even for studies with a very small number of replicates, as the data generating distribution may be arbitrarily variable in each case.
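To make the definition concrete, here is a toy simulation (entirely made up, not from the paper): both data sets are drawn from the same distribution F, and the same pipeline p() is applied to each, so the two estimates differ only by sampling variation - which is exactly why a single replication is one data point, not a verdict:

```r
# Toy version of the definition: D0 ~ F and D1 ~ F, same pipeline p().
set.seed(1)
p <- function(d) mean(d)        # a stand-in analysis pipeline
d0 <- rnorm(20, mean = 0.5)     # original study data,  D0 ~ F
d1 <- rnorm(20, mean = 0.5)     # replication data,     D1 ~ F

# The study "replicates" by construction, yet the two estimates differ
# purely due to sampling variation in the small samples.
c(e0 = p(d0), e1 = p(d1))
```

Run this a few times with different seeds and the gap between e0 and e1 can be substantial at n = 20, even though the data generating distribution never changed.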

Regardless of this definition, I was excited that the OSF folks did the study and pulled it off as well as they did, and I was a bit bummed about the most common reaction. There is an easy narrative that "science is broken", which I think isn't a positive thing for a number of reasons. I love the way that {reproducibility/replicability/open science/open publication} are becoming more and more common, but we often fall into the same trap of wanting to report these results as clear-cut, just as we do when headlines exaggerate or oversimplify scientific discoveries. I'm excited to see how these kinds of studies look in 10 years when Github/open science/pre-prints/etc. are all the standards.


The Leek group guide to writing your first paper

I have written guides on reviewing papers, sharing data, and writing R packages. One thing I haven't touched on until now is writing papers. Certainly for me, and I think for a lot of students, the hardest transition in graduate school is between taking classes and doing research.

There are several hard parts to this transition including trying to find a problem, trying to find an advisor, and having a ton of unstructured time. One of the hardest things I've found is knowing (a) when to start writing your first paper and (b) how to do it. So I wrote a guide for students in my group:

On how to write your first paper. It might be useful for other folks as well so I put it up on Github. Just like with the other guides I've written this is a very opinionated (read: doesn't apply to everyone) guide. I also would appreciate any feedback/pull requests people have.


Interview with COPSS award Winner John Storey



Editor's Note: We are again pleased to interview the COPSS President's award winner. The COPSS Award is one of the most prestigious in statistics, sometimes called the Nobel Prize in statistics. This year the award went to John Storey, who also won the Mortimer Spiegelman award for his outstanding contribution to public health statistics. This interview is a particular pleasure since John was my Ph.D. advisor and has been a major role model and incredibly supportive mentor for me throughout my career. He also did the whole interview in markdown and put it under version control at Github, so it is fully reproducible.

SimplyStats: Do you consider yourself to be a statistician, data scientist, machine learner, or something else?

JS: For the most part I consider myself to be a statistician, but I’m also very serious about genetics/genomics, data analysis, and computation. I was trained in statistics and genetics, primarily statistics. I was also exposed to a lot of machine learning during my training since Rob Tibshirani was my PhD advisor. However, I consider my research group to be a data science group. We have the Venn diagram reasonably well covered: experimentalists, programmers, data wranglers, and developers of theory and methods; biologists, computer scientists, and statisticians.

SimplyStats: How did you find out you had won the COPSS Presidents’ Award?

JS: I received a phone call from the chairperson of the awards committee while I was visiting the Department of Statistical Science at Duke University to give a seminar. It was during the seminar reception, and I stepped out into the hallway to take the call. It was really exciting to get the news!

SimplyStats: One of the areas where you have had a big impact is inference in massively parallel problems. How do you feel high-dimensional inference is different from more traditional statistical inference?

JS: My experience is that the most productive way to approach high-dimensional inference problems is to first think about a given problem in the scenario where the parameters of interest are random, and the joint distribution of these parameters is incorporated into the framework. In other words, I first gain an understanding of the problem in a Bayesian framework. Once this is well understood, it is sometimes possible to move in a more empirical and nonparametric direction. However, I have found that I can be most successful if my first results are in this Bayesian framework.

As an example, Theorem 1 from Storey (2003) Annals of Statistics was the first result I obtained in my work on false discovery rates. This paper first appeared as a technical report in early 2001, and the results spawned further work on a point estimation approach to false discovery rates, the local false discovery rate, q-value and its application to genomics, and a unified theoretical framework.

Besides false discovery rates, this approach has been useful in my work on the optimal discovery procedure as well as surrogate variable analysis (in particular, Desai and Storey 2012 for surrogate variable analysis). For high-dimensional inference problems, I have also found it is important to consider whether there are any plausible underlying causal relationships among variables, even if causal inference is not the goal. For example, causal model considerations provided some key guidance in a recent paper of ours on testing for genetic associations in the presence of arbitrary population structure. I think there is a lot of insight to be gained by considering what is the appropriate approach for a high-dimensional inference problem under different causal relationships among the variables.

SimplyStats: Do you have a process when you are tackling a hard problem or working with students on a hard problem?

JS: I like to work on statistics research that is aimed at answering a specific scientific problem (usually in genomics). My process is to try to understand the why in the problem as much as the how. The path to success is often found in the former. I try first to find solutions to research problems by using simple tools and ideas. I like to get my hands dirty with real data as early as possible in the process. I like to incorporate some theory into this process, but I prefer methods that work really well in practice over those that have beautiful theory justifying them without demonstrated success on real-world applications. In terms of what I do day-to-day, listening to music is integral to my process, for both concentration and creative inspiration: typically King Crimson or some variant of metal or ambient – which Simply Statistics co-founder Jeff Leek got to endure enjoy for years during his PhD in my lab.

SimplyStats: You are the founding Director of the Center for Statistics and Machine Learning at Princeton. What parts of the new gig are you most excited about?

JS: Princeton closed its Department of Statistics in the early 1980s. Because of this, the style of statistician and machine learner we have here today is one who’s comfortable being appointed in a field outside of statistics or machine learning. Examples include myself in genomics, Kosuke Imai in political science, Jianqing Fan in finance and economics, and Barbara Engelhardt in computer science. Nevertheless, statistics and machine learning here is strong, albeit too small at the moment (which will be changing soon). This is an interesting place to start, very different from most universities.

What I’m most excited about is that we get to answer the question: “What’s the best way to build a faculty, educate undergraduates, and create a PhD program starting now, focusing on the most important problems of today?”

For those who are interested, we’ll be releasing a public version of our strategic plan within about six months. We’re trying to do something unique and forward-thinking, which will hopefully make Princeton an influential member of the statistics, machine learning, and data science communities.

SimplyStats: You are organizing the Tukey conference at Princeton (to be held September 18, details here). Do you think Tukey’s influence will affect your vision for re-building statistics at Princeton?

JS: Absolutely, Tukey has been and will be a major influence in how we re-build. He made so many important contributions, and his approach was extremely forward thinking and tied into real-world problems. I strongly encourage everyone to read Tukey's 1962 paper titled The Future of Data Analysis. There he is looking 50 years into the future, foreseeing the rise of data science. This paper has truly amazing insights, including:

For a long time I have thought I was a statistician, interested in inferences from the particular to the general. But as I have watched mathematical statistics evolve, I have had cause to wonder and to doubt.

All in all, I have come to feel that my central interest is in data analysis, which I take to include, among other things: procedures for analyzing data, techniques for interpreting the results of such procedures, ways of planning the gathering of data to make its analysis easier, more precise or more accurate, and all the machinery and results of (mathematical) statistics which apply to analyzing data.

Data analysis is a larger and more varied field than inference, or incisive procedures, or allocation.

By and large, the great innovations in statistics have not had correspondingly great effects upon data analysis. . . . Is it not time to seek out novelty in data analysis?

In this regard, another paper that has been influential in how we are re-building is Leo Breiman’s titled Statistical Modeling: The Two Cultures. We’re building something at Princeton that includes both cultures and seamlessly blends them into a bigger picture community concerned with data-driven scientific discovery and technology development.

SimplyStats: What advice would you give young statisticians getting into the discipline now?

JS: My most general advice is don’t isolate yourself within statistics. Interact with and learn from other fields. Work on problems that are important to practitioners of science and technology development. I recommend that students master both “traditional statistics” and at least one of the following: (1) computational and algorithmic approaches to data analysis, especially those more frequently studied in machine learning or data science; (2) a substantive scientific area where data-driven discovery is extremely important (e.g., social sciences, economics, environmental sciences, genomics, neuroscience, etc.). I also recommend that students consider publishing in scientific journals or computer science conference proceedings, in addition to traditional statistics journals. I agree with a lot of the constructive advice and commentary given on the Simply Statistics blog, such as encouraging students to learn about reproducible research, problem-driven research, software development, improving data analyses in science, and outreach to non-statisticians. These things are very important for the future of statistics.


Interview with Sherri Rose and Laura Hatfield


Sherri Rose and Laura Hatfield

Rose/Hatfield © Savannah Bergquist

Laura Hatfield and Sherri Rose are Assistant Professors specializing in biostatistics at Harvard Medical School in the Department of Health Care Policy. Laura received her PhD in Biostatistics from the University of Minnesota and Sherri completed her PhD in Biostatistics at UC Berkeley. They are developing novel statistical methods for health policy problems.

SimplyStats: Do you consider yourselves statisticians, data scientists, machine learners, or something else?

Rose: I’d definitely say a statistician. Even when I'm working on things that fall into the categories of data science or machine learning, there's underlying statistical theory guiding that process, be it for methods development or applications. Basically, there's a statistical foundation to everything I do.

Hatfield: When people ask what I do, I start by saying that I do research in health policy. Then I say I’m a statistician by training and I work with economists and physicians. People have mistaken ideas about what a statistician or professor does, so describing my context and work seems more informative. If I’m at a party, I usually wrap it up in a bow as, “I crunch numbers to study how Obamacare is working.” [laughs]


SimplyStats: What is the Health Policy Data Science Lab? How did you decide to start that?

Hatfield: We wanted to give our trainees a venue to promote their work and get feedback from their peers. And it helps me keep up on the cool projects Sherri and her students are working on.

Rose: This grew out of us starting to jointly mentor trainees. It's been a great way for us to make intellectual contributions to each other’s work through Lab meetings. Laura and I approach statistics from completely different frameworks, but work on related applications, so that's a unique structure for a lab.


SimplyStats: What kinds of problems are your groups working on these days? Are they mostly focused on health policy?

Rose: One of the fun things about working in health policy is that it is quite expansive. Statisticians can have an even bigger impact on science and public health if we take that next step: thinking about the policy implications of our research, and then about who needs to see the work in order to influence relevant policies. A couple of projects I’m working on that demonstrate this breadth are a machine learning framework for risk adjustment in insurance plan payment and a new estimator for causal effects in a complex epidemiologic study of chronic disease. The first might be considered more obviously health policy, but the second will have important policy implications as well.

Hatfield: When I start an applied collaboration, I’m also thinking, “Where is the methods paper?” Most of my projects use messy observational data, so there is almost always a methods paper. For example, many studies here need to find a control group from an administrative data source. I’ve been keeping track of challenges in this process. One of our Lab students is working with me on a pathological case of a seemingly benign control group selection method gone bad. I love the creativity required in this work; my first 10 analysis ideas may turn out to be infeasible given the data, but that’s what makes this fun!


SimplyStats: What are some particular challenges of working with large health data?

Hatfield: When I first heard about the huge sample sizes, I was excited! Then I learned that data not collected for research purposes...

Rose: This was going to be my answer!

Hatfield: ...are very hard to use for research! In a recent project, I’ve been studying how giving people a tool to look up prices for medical services changes their health care spending. But the data set we have leaves out [painful pause] a lot of variables we’d like to use for control group selection and... a lot of the prices. But as I said, these gaps in the data are begging to be filled by new methods.

Rose: I think the fact that we have similar answers is important. I’ve repeatedly seen “big data” not have a strong signal for the research question, since they weren’t collected for that purpose. It’s easy to get excited about thousands of covariates in an electronic health record, but so much of it is noise, and then you end up with an R² of 10%. It can be difficult enough to generate an effective prediction function, even with innovative tools, let alone try to address causal inference questions. It goes back to basics: what’s the research question, and how can we translate that into a statistical problem we can answer given the limitations of the data?

SimplyStats: You both have very strong data science skills but are in academic positions. Do you have any advice for students considering the tradeoff between academia and industry?

Hatfield: I think there is more variance within academia and within industry than between the two.

Rose: Really? That’s surprising to me...

Hatfield: I had stereotypes about academic jobs, but my current job defies those.

Rose: What if a larger component of your research platform included programming tools and R packages? My immediate thought was about computing and its role in academia. Statisticians in genomics have navigated this better than some other areas. It can surely be done, but there are still challenges folding that into an academic career.

Hatfield: I think academia imposes few restrictions on what you can disseminate compared to industry, where there may be more privacy and intellectual property concerns. But I take your point that R packages do not impress most tenure and promotion committees.

Rose: You want to find a good match between how you like spending your time and what’s rewarded. Not all academic jobs are the same and not all industry jobs are alike either. I wrote a more detailed guest post on this topic for Simply Statistics.

Hatfield: I totally agree you should think about how you’d actually spend your time in any job you’re considering, rather than relying on broad ideas about industry versus academia. Do you love writing? Do you love coding? etc.


SimplyStats: You are both adopters of social media as a mechanism of disseminating your work and interacting with the community. What do you think of social media as a scientific communication tool? Do you find it is enhancing your careers?

Hatfield: Sherri is my social media mentor!

Rose: I think social media can be a useful tool for networking, finding and sharing neat articles and news, and putting your research out there to a broader audience. I’ve definitely received speaking invitations and started collaborations because people initially “knew me from Twitter.” It’s become a way to recruit students as well. Prospective students are more likely to “know me” from a guest post or Twitter than traditional academic products, like journal articles.

Hatfield: I’m grateful for our Lab’s new Twitter because it’s a purely academic account. My personal account has been awkwardly transitioning to include professional content; I still tweet silly things there.

Rose: My timeline might have a cat picture or two.

Hatfield: My very favorite thing about academic Twitter is discovering things I wouldn’t have even known to search for, especially packages and tricks in R. For example, that’s how I got converted to tidy data and dplyr.

Rose: I agree. I think it’s a fantastic place to become exposed to work that’s incredibly related to your own but in another field, and you wouldn’t otherwise find it preparing a typical statistics literature review.


SimplyStats: What would you change in the statistics community?

Rose: Mentoring. I was tremendously lucky to receive incredible mentoring as a graduate student and now as a new faculty member. Not everyone gets this, and trainees don’t know where to find guidance. I’ve actively reached out to trainees during conferences and university visits, erring on the side of offering too much unsolicited help, because I feel there’s a need for that. I also have a resources page on my website that I continue to update. I wish I had a more global solution beyond encouraging statisticians to take an active role in mentoring, and not just with their own trainees. We shouldn’t lose good people because they didn’t get the support they needed.

Hatfield: I think we could make conferences much better! Being in the same physical space at the same time is very precious. I would like to take better advantage of that at big meetings to do work that requires face time. Talks are not an example of this. Workshops and hackathons and panels and working groups -- these all make better use of face-to-face time. And are a lot more fun!



If you ask different questions you get different answers - one more way science isn't broken it is just really hard

If you haven't already read the amazing piece by Christie Aschwanden on why Science isn't Broken you should do so immediately. It does a great job of capturing the nuance of statistics as applied to real data sets and how that can be misconstrued as science being "broken," without falling for the easy "everything is wrong" meme.

One thing that caught my eye was how the piece highlighted a crowd-sourced data analysis of soccer red cards. The key figure for that analysis is this one:


I think the figure and underlying data are fascinating: they really highlight the human behavioral variation in data analysis, and you can even see some data-analysis subcultures emerging from the descriptions of how people did the analysis and justified (or didn't justify) their use of covariates.

One subtlety of the figure that I missed on the original reading is that not all of the estimates being reported are measuring the same thing. For example, if some groups adjusted for the country of origin of the referees and some did not, then the estimates for those two groups are measuring different things (the association conditional on country of origin or not, respectively). In this case the estimates may be different, but entirely consistent with each other, since they are just measuring different things.

If you ask two people to do the analysis and you only ask them the simple question: Are referees more likely to give red cards to dark-skinned players? then you may get two different answers based on those two estimates. But in reality the analysts are reporting answers to two different questions:

  1. Are referees more likely to give red cards to dark-skinned players holding country of origin fixed?
  2. Are referees more likely to give red cards to dark-skinned players averaging over country of origin (and everything else)?

The subtlety lies in the fact that changes to covariates in the analysis are actually changing the hypothesis you are studying.
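To make this concrete, here is a small worked example in Python with entirely hypothetical counts (this is not the real red-card data, just a toy table I constructed to illustrate the point): within each referee country, dark- and light-skinned players receive red cards at exactly the same rate, yet the marginal (unadjusted) rates differ, because in this toy table dark-skinned players happen to play more of their games in the country whose referees give more cards to everyone.

```python
# Hypothetical counts, NOT the real red-card data: a toy table chosen
# so that within each referee country, dark- and light-skinned players
# get red cards at exactly the same rate.
counts = {
    # (referee country, player skin tone): (red cards, games)
    ("A", "light"): (8, 100),
    ("A", "dark"):  (2, 25),
    ("B", "light"): (2, 10),
    ("B", "dark"):  (20, 100),
}

def rate(cards, games):
    return cards / games

# Question 1: holding country of origin fixed -- no difference at all.
for country in ("A", "B"):
    light = rate(*counts[(country, "light")])
    dark = rate(*counts[(country, "dark")])
    assert light == dark  # 0.08 in country A, 0.20 in country B

# Question 2: averaging over country -- a clear difference appears,
# because dark-skinned players play more of their games in country B,
# whose referees hand out more cards to everyone.
def marginal(skin):
    cards = sum(c for (_, s), (c, g) in counts.items() if s == skin)
    games = sum(g for (_, s), (c, g) in counts.items() if s == skin)
    return cards / games

print(marginal("light"))  # 10/110 ~= 0.091
print(marginal("dark"))   # 22/125  = 0.176
```

Both sets of numbers are computed correctly from the same data; they simply answer different questions, which is exactly what happens when analysis teams choose different covariate sets.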

So in fact the conclusions in that figure may all be entirely consistent after you condition on asking the same question. I'd be interested to see the same plot, but only for the groups that conditioned on the same set of covariates, for example. This is just one more reason that science is really hard and why I'm so impressed at how well the FiveThirtyEight piece captured this nuance.