Simply Statistics

25 Aug

Interview with COPSS Award Winner John Storey

jdstorey

 

Editor's Note: We are again pleased to interview the COPSS Presidents' Award winner. The COPSS Award is one of the most prestigious in statistics, sometimes called the Nobel Prize in statistics. This year the award went to John Storey, who also won the Mortimer Spiegelman Award for his outstanding contribution to public health statistics. This interview is a particular pleasure since John was my Ph.D. advisor and has been a major role model and incredibly supportive mentor for me throughout my career. He also did the whole interview in markdown and put it under version control on GitHub, so it is fully reproducible.

SimplyStats: Do you consider yourself to be a statistician, data scientist, machine learner, or something else?

JS: For the most part I consider myself to be a statistician, but I’m also very serious about genetics/genomics, data analysis, and computation. I was trained in statistics and genetics, primarily statistics. I was also exposed to a lot of machine learning during my training since Rob Tibshirani was my PhD advisor. However, I consider my research group to be a data science group. We have the Venn diagram reasonably well covered: experimentalists, programmers, data wranglers, and developers of theory and methods; biologists, computer scientists, and statisticians.

SimplyStats: How did you find out you had won the COPSS Presidents’ Award?

JS: I received a phone call from the chairperson of the awards committee while I was visiting the Department of Statistical Science at Duke University to give a seminar. It was during the seminar reception, and I stepped out into the hallway to take the call. It was really exciting to get the news!

SimplyStats: One of the areas where you have had a big impact is inference in massively parallel problems. How do you feel high-dimensional inference is different from more traditional statistical inference?

JS: My experience is that the most productive way to approach high-dimensional inference problems is to first think about a given problem in the scenario where the parameters of interest are random, and the joint distribution of these parameters is incorporated into the framework. In other words, I first gain an understanding of the problem in a Bayesian framework. Once this is well understood, it is sometimes possible to move in a more empirical and nonparametric direction. However, I have found that I can be most successful if my first results are in this Bayesian framework.

As an example, Theorem 1 from Storey (2003) Annals of Statistics was the first result I obtained in my work on false discovery rates. This paper first appeared as a technical report in early 2001, and the results spawned further work on a point estimation approach to false discovery rates, the local false discovery rate, q-value and its application to genomics, and a unified theoretical framework.

Besides false discovery rates, this approach has been useful in my work on the optimal discovery procedure as well as surrogate variable analysis (in particular, Desai and Storey 2012 for surrogate variable analysis). For high-dimensional inference problems, I have also found it is important to consider whether there are any plausible underlying causal relationships among variables, even if causal inference is not the goal. For example, causal model considerations provided some key guidance in a recent paper of ours on testing for genetic associations in the presence of arbitrary population structure. I think there is a lot of insight to be gained by considering what the appropriate approach to a high-dimensional inference problem is under different causal relationships among the variables.

SimplyStats: Do you have a process when you are tackling a hard problem or working with students on a hard problem?

JS: I like to work on statistics research that is aimed at answering a specific scientific problem (usually in genomics). My process is to try to understand the why in the problem as much as the how. The path to success is often found in the former. I try first to find solutions to research problems by using simple tools and ideas. I like to get my hands dirty with real data as early as possible in the process. I like to incorporate some theory into this process, but I prefer methods that work really well in practice over those that have beautiful theory justifying them without demonstrated success on real-world applications. In terms of what I do day-to-day, listening to music is integral to my process, for both concentration and creative inspiration: typically King Crimson or some variant of metal or ambient – which Simply Statistics co-founder Jeff Leek got to endure (enjoy!) for years during his PhD in my lab.

SimplyStats: You are the founding Director of the Center for Statistics and Machine Learning at Princeton. What parts of the new gig are you most excited about?

JS: Princeton closed its Department of Statistics in the early 1980s. Because of this, the style of statistician and machine learner we have here today is one who’s comfortable being appointed in a field outside of statistics or machine learning. Examples include myself in genomics, Kosuke Imai in political science, Jianqing Fan in finance and economics, and Barbara Engelhardt in computer science. Nevertheless, statistics and machine learning here is strong, albeit too small at the moment (which will be changing soon). This is an interesting place to start, very different from most universities.

What I’m most excited about is that we get to answer the question: “What’s the best way to build a faculty, educate undergraduates, and create a PhD program starting now, focusing on the most important problems of today?”

For those who are interested, we’ll be releasing a public version of our strategic plan within about six months. We’re trying to do something unique and forward-thinking, which will hopefully make Princeton an influential member of the statistics, machine learning, and data science communities.

SimplyStats: You are organizing the Tukey conference at Princeton (to be held September 18, details here). Do you think Tukey’s influence will affect your vision for re-building statistics at Princeton?

JS: Absolutely, Tukey has been and will be a major influence in how we re-build. He made so many important contributions, and his approach was extremely forward thinking and tied into real-world problems. I strongly encourage everyone to read Tukey’s 1962 paper titled The Future of Data Analysis. Here he is, peering 50 years into the future and foreseeing the rise of data science. This paper has truly amazing insights, including:

For a long time I have thought I was a statistician, interested in inferences from the particular to the general. But as I have watched mathematical statistics evolve, I have had cause to wonder and to doubt.

All in all, I have come to feel that my central interest is in data analysis, which I take to include, among other things: procedures for analyzing data, techniques for interpreting the results of such procedures, ways of planning the gathering of data to make its analysis easier, more precise or more accurate, and all the machinery and results of (mathematical) statistics which apply to analyzing data.

Data analysis is a larger and more varied field than inference, or incisive procedures, or allocation.

By and large, the great innovations in statistics have not had correspondingly great effects upon data analysis. . . . Is it not time to seek out novelty in data analysis?

In this regard, another paper that has been influential in how we are re-building is Leo Breiman’s paper titled Statistical Modeling: The Two Cultures. We’re building something at Princeton that includes both cultures and seamlessly blends them into a bigger-picture community concerned with data-driven scientific discovery and technology development.

SimplyStats: What advice would you give young statisticians getting into the discipline now?

JS: My most general advice is don’t isolate yourself within statistics. Interact with and learn from other fields. Work on problems that are important to practitioners of science and technology development. I recommend that students master both “traditional statistics” and at least one of the following: (1) computational and algorithmic approaches to data analysis, especially those more frequently studied in machine learning or data science; (2) a substantive scientific area where data-driven discovery is extremely important (e.g., social sciences, economics, environmental sciences, genomics, neuroscience, etc.). I also recommend that students consider publishing in scientific journals or computer science conference proceedings, in addition to traditional statistics journals. I agree with a lot of the constructive advice and commentary given on the Simply Statistics blog, such as encouraging students to learn about reproducible research, problem-driven research, software development, improving data analyses in science, and outreach to non-statisticians. These things are very important for the future of statistics.

21 Aug

Interview with Sherri Rose and Laura Hatfield

 

[Photo: Sherri Rose and Laura Hatfield © Savannah Bergquist]

Laura Hatfield and Sherri Rose are Assistant Professors specializing in biostatistics at Harvard Medical School in the Department of Health Care Policy. Laura received her PhD in Biostatistics from the University of Minnesota and Sherri completed her PhD in Biostatistics at UC Berkeley. They are developing novel statistical methods for health policy problems.

SimplyStats: Do you consider yourselves statisticians, data scientists, machine learners, or something else?

Rose: I’d definitely say a statistician. Even when I'm working on things that fall into the categories of data science or machine learning, there's underlying statistical theory guiding that process, be it for methods development or applications. Basically, there's a statistical foundation to everything I do.

Hatfield: When people ask what I do, I start by saying that I do research in health policy. Then I say I’m a statistician by training and I work with economists and physicians. People have mistaken ideas about what a statistician or professor does, so describing my context and work seems more informative. If I’m at a party, I usually wrap it up in a bow as, “I crunch numbers to study how Obamacare is working.” [laughs]

 

SimplyStats: What is the Health Policy Data Science Lab? How did you decide to start that?

Hatfield: We wanted to give our trainees a venue to promote their work and get feedback from their peers. And it helps me keep up on the cool projects Sherri and her students are working on.

Rose: This grew out of us starting to jointly mentor trainees. It's been a great way for us to make intellectual contributions to each other’s work through Lab meetings. Laura and I approach statistics from completely different frameworks, but work on related applications, so that's a unique structure for a lab.

 

SimplyStats: What kinds of problems are your groups working on these days? Are they mostly focused on health policy?

Rose: One of the fun things about working in health policy is that it is quite expansive. Statisticians can have an even bigger impact on science and public health if we take that next step: thinking about the policy implications of our research, and then about who needs to see the work in order to influence relevant policies. A couple projects I’m working on that demonstrate this breadth include a machine learning framework for risk adjustment in insurance plan payment and a new estimator for causal effects in a complex epidemiologic study of chronic disease. The first might be considered more obviously health policy, but the second will have important policy implications as well.

Hatfield: When I start an applied collaboration, I’m also thinking, “Where is the methods paper?” Most of my projects use messy observational data, so there is almost always a methods paper. For example, many studies here need to find a control group from an administrative data source. I’ve been keeping track of challenges in this process. One of our Lab students is working with me on a pathological case of a seemingly benign control group selection method gone bad. I love the creativity required in this work; my first 10 analysis ideas may turn out to be infeasible given the data, but that’s what makes this fun!

 

SimplyStats: What are some particular challenges of working with large health data?

Hatfield: When I first heard about the huge sample sizes, I was excited! Then I learned that data not collected for research purposes...

Rose: This was going to be my answer!

Hatfield: ...are very hard to use for research! In a recent project, I’ve been studying how giving people a tool to look up prices for medical services changes their health care spending. But the data set we have leaves out [painful pause] a lot of variables we’d like to use for control group selection and... a lot of the prices. But as I said, these gaps in the data are begging to be filled by new methods.

Rose: I think the fact that we have similar answers is important. I’ve repeatedly seen “big data” not have a strong signal for the research question, since they weren’t collected for that purpose. It’s easy to get excited about thousands of covariates in an electronic health record, but so much of it is noise, and then you end up with an R^2 of 10%. It can be difficult enough to generate an effective prediction function, even with innovative tools, let alone try to address causal inference questions. It goes back to basics: what’s the research question, and how can we translate that into a statistical problem we can answer given the limitations of the data?

SimplyStats: You both have very strong data science skills but are in academic positions. Do you have any advice for students considering the tradeoff between academia and industry?

Hatfield: I think there is more variance within academia and within industry than between the two.

Rose: Really? That’s surprising to me...

Hatfield: I had stereotypes about academic jobs, but my current job defies those.

Rose: What if a larger component of your research platform included programming tools and R packages? My immediate thought was about computing and its role in academia. Statisticians in genomics have navigated this better than some other areas. It can surely be done, but there are still challenges folding that into an academic career.

Hatfield: I think academia imposes few restrictions on what you can disseminate compared to industry, where there may be more privacy and intellectual property concerns. But I take your point that R packages do not impress most tenure and promotion committees.

Rose: You want to find a good match between how you like spending your time and what’s rewarded. Not all academic jobs are the same and not all industry jobs are alike either. I wrote a more detailed guest post on this topic for Simply Statistics.

Hatfield: I totally agree you should think about how you’d actually spend your time in any job you’re considering, rather than relying on broad ideas about industry versus academia. Do you love writing? Do you love coding? etc.

 

SimplyStats: You are both adopters of social media as a mechanism of disseminating your work and interacting with the community. What do you think of social media as a scientific communication tool? Do you find it is enhancing your careers?

Hatfield: Sherri is my social media mentor!

Rose: I think social media can be a useful tool for networking, finding and sharing neat articles and news, and putting your research out there to a broader audience. I’ve definitely received speaking invitations and started collaborations because people initially “knew me from Twitter.” It’s become a way to recruit students as well. Prospective students are more likely to “know me” from a guest post or Twitter than traditional academic products, like journal articles.

Hatfield: I’m grateful for our Lab’s new Twitter because it’s a purely academic account. My personal account has been awkwardly transitioning to include professional content; I still tweet silly things there.

Rose: My timeline might have a cat picture or two.

Hatfield: My very favorite thing about academic Twitter is discovering things I wouldn’t have even known to search for, especially packages and tricks in R. For example, that’s how I got converted to tidy data and dplyr.

Rose: I agree. I think it’s a fantastic place to become exposed to work that’s incredibly related to your own but in another field, and you wouldn’t otherwise find it preparing a typical statistics literature review.

 

SimplyStats: What would you change in the statistics community?

Rose: Mentoring. I was tremendously lucky to receive incredible mentoring as a graduate student and now as a new faculty member. Not everyone gets this, and trainees don’t know where to find guidance. I’ve actively reached out to trainees during conferences and university visits, erring on the side of offering too much unsolicited help, because I feel there’s a need for that. I also have a resources page on my website that I continue to update. I wish I had a more global solution beyond encouraging statisticians to take an active role in mentoring not just your own trainees. We shouldn’t lose good people because they didn’t get the support they needed.

Hatfield: I think we could make conferences much better! Being in the same physical space at the same time is very precious. I would like to take better advantage of that at big meetings to do work that requires face time. Talks are not an example of this. Workshops and hackathons and panels and working groups -- these all make better use of face-to-face time. And are a lot more fun!

 

20 Aug

If you ask different questions you get different answers - one more way science isn't broken, it is just really hard

If you haven't already read the amazing piece by Christie Aschwanden on why Science isn't Broken you should do so immediately. It does an amazing job of capturing the nuance of statistics as applied to real data sets and how that can be misconstrued as science being "broken" without falling for the easy "everything is wrong" meme.

One thing that caught my eye was how the piece highlighted a crowd-sourced data analysis of soccer red cards. The key figure for that analysis is this one:

 

I think the figure and underlying data for this figure are fascinating in that they really highlight the human behavioral variation in data analysis. You can even see some data analysis subcultures emerging from the descriptions of how people did the analysis and justified (or didn't justify) the use of covariates.

One subtlety of the figure that I missed on the original reading is that not all of the estimates being reported are measuring the same thing. For example, if some groups adjusted for the country of origin of the referees and some did not, then the estimates for those two groups are measuring different things (the association conditional on country of origin or not, respectively). In this case the estimates may be different, but entirely consistent with each other, since they are just measuring different things.

If you ask two people to do the analysis and you only ask them the simple question: Are referees more likely to give red cards to dark skinned players? then you may get a different answer based on those two estimates. But the reality is the answers the analysts are reporting are actually to the questions:

  1. Are referees more likely to give red cards to dark skinned players holding country of origin fixed?
  2. Are referees more likely to give red cards to dark skinned players averaging over country of origin (and everything else)?

The subtlety lies in the fact that changes to covariates in the analysis are actually changing the hypothesis you are studying.
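To make the point concrete, here is a small simulated example in R (entirely made-up data, not the actual crowdsourced red-card dataset): when skin tone is associated with referee country of origin, the unadjusted and the country-adjusted estimates answer different questions and can legitimately differ.

set.seed(42)
n <- 10000
country <- rbinom(n, 1, 0.5)                         # referee country of origin (0/1)
dark    <- rbinom(n, 1, 0.3 + 0.3 * country)         # skin tone, associated with country
redcard <- rbinom(n, 1, plogis(-3 + 0.8 * country))  # red cards driven by country, not skin tone

# "Averaging over country": a nonzero association with skin tone appears
coef(glm(redcard ~ dark, family = binomial))["dark"]
# "Holding country of origin fixed": the association is near zero
coef(glm(redcard ~ dark + country, family = binomial))["dark"]

Both estimates are "right"; they simply answer the two different questions above.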

So in fact the conclusions in that figure may all be entirely consistent after you condition on asking the same question. I'd be interested to see the same plot, but only for the groups that conditioned on the same set of covariates, for example. This is just one more reason that science is really hard and why I'm so impressed at how well the FiveThirtyEight piece captured this nuance.

 

 

19 Aug

P > 0.05? I can make any p-value statistically significant with adaptive FDR procedures

Everyone knows now that you have to correct for multiple testing when you calculate many p-values, otherwise this can happen:

http://xkcd.com/882/

 

One of the most popular ways to correct for multiple testing is to estimate or control the false discovery rate. The false discovery rate attempts to quantify the fraction of discoveries that are false. If we call all p-values less than some threshold t significant, then, borrowing notation from this great introduction to false discovery rates:

F(t) = \#\{\text{null } p_i \le t\} \qquad S(t) = \#\{p_i \le t\}

 

So F(t) is the (unknown) total number of null hypotheses called significant, and S(t) is the total number of hypotheses called significant. The FDR is the expected ratio of these two quantities, which, under certain assumptions, can be approximated by the ratio of the expectations.

 

\text{FDR}(t) = E\!\left[\frac{F(t)}{S(t)}\right] \approx \frac{E[F(t)]}{E[S(t)]}

 

To get an estimate of the FDR we just need estimates for E[F(t)] and E[S(t)]. The latter is easy to estimate as just the total number of rejections (the number of p-values ≤ t). If you assume that the p-values follow the expected distribution, then E[F(t)] can be approximated by multiplying the fraction of null hypotheses by the total number of hypotheses and by t, since null p-values are uniform. To do this, we need an estimate for \pi_0, the proportion of null hypotheses. There are a large number of ways to estimate this quantity, but it is almost always estimated using the full distribution of computed p-values in an experiment. The most popular estimator compares the fraction of p-values greater than some cutoff \lambda to the fraction you would expect if every single hypothesis were null, giving \hat{\pi}_0 = \#\{p_i > \lambda\} / (m(1 - \lambda)), which is approximately the fraction of null hypotheses.

Combining the above equation with our estimates for E[F(t)]  and E[S(t)] we get:

 

\widehat{\text{FDR}}(t) = \frac{\hat{\pi}_0 \, m \, t}{\#\{p_i \le t\}}
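As a rough sketch (my own illustration, not code from the original post), this plug-in estimate can be computed in R as follows, using the common choice of a 0.5 cutoff for the \hat{\pi}_0 estimator:

# Plug-in FDR estimate at threshold t; lambda is the cutoff used to estimate pi_0
estimate_fdr <- function(p, t, lambda = 0.5) {
  m <- length(p)
  pi0_hat <- min(1, mean(p > lambda) / (1 - lambda))  # estimated fraction of null hypotheses
  e_F <- pi0_hat * m * t                              # approximates E[F(t)]
  e_S <- sum(p <= t)                                  # estimates E[S(t)]: number of discoveries
  e_F / e_S
}

# Example: 1000 null p-values plus 200 small ones, estimated FDR at t = 0.05
p <- c(runif(1000), rbeta(200, 0.1, 10))
estimate_fdr(p, t = 0.05)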

 

The q-value is a multiple testing analog of the p-value and is defined as:

\hat{q}(p_i) = \min_{t \ge p_i} \widehat{\text{FDR}}(t)

 

This is, of course, a very loose version of the argument, and you can get a more technical description here. But the main thing to notice is that the q-value depends on the estimated proportion of null hypotheses, which depends on the distribution of the observed p-values. The smaller the estimated fraction of null hypotheses, the smaller the FDR estimate and the smaller the q-value. This suggests a way to make any p-value significant by altering its "testing partners". Here is a quick example. Suppose that we have done a test and have a p-value of 0.8. Not super significant. Suppose we perform this test in conjunction with a number of hypotheses that are null, generating a p-value distribution like this:

[Histogram: p-values roughly uniform between 0 and 1]

Then you get a q-value greater than 0.99, as you would expect. But if you test that exact same p-value alongside a ton of other non-null hypotheses that generate tiny p-values, in a distribution that looks like this:

[Histogram: p-values concentrated near zero]

 

Then you get a q-value of 0.0001 for that same p-value of 0.8. The reason is that the estimate of the fraction of null hypotheses goes essentially to zero, which drives down the q-value. You can do this with any p-value: if you make its testing partners have sufficiently small p-values, then its q-value will be as small as you like.
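Here is a minimal sketch in R of this demonstration (my own toy version, not the gist referenced in the notes below): the same p-value of 0.8 gets a very different q-value depending on its testing partners, because the adaptive estimate of the fraction of nulls changes.

# Storey-style q-values with an adaptive pi_0 estimate (lambda = 0.5 is an illustrative choice)
storey_qvalue <- function(p, lambda = 0.5) {
  m <- length(p)
  pi0 <- min(1, mean(p > lambda) / (1 - lambda))            # estimated fraction of nulls
  fdr <- sapply(p, function(t) pi0 * m * t / sum(p <= t))   # estimated FDR at each threshold t = p_i
  sapply(p, function(t) min(fdr[p >= t]))                   # q-value: smallest FDR at any threshold >= p_i
}

set.seed(1)
p_null <- c(0.8, runif(9999))              # testing partners are null: uniform p-values
p_sig  <- c(0.8, rbeta(9999, 0.1, 10))     # testing partners are non-null: mostly tiny p-values

storey_qvalue(p_null)[1]   # close to 1 for the p-value of 0.8, as expected
storey_qvalue(p_sig)[1]    # tiny, because the estimated fraction of nulls is near zero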

A couple of things to note:

  • Obviously doing this on purpose to change the significance of a calculated p-value is cheating and shouldn't be done.
  • For correctly calculated p-values on a related set of hypotheses this is actually a sensible property to have - if you have almost all very small p-values and one very large p-value, you are doing a set of tests where almost everything appears to be alternative and you should weight that in some sensible way.
  • This is the reason that sometimes a "multiple testing adjusted" p-value (or q-value) is smaller than the p-value itself.
  • This doesn't affect non-adaptive FDR procedures - but those procedures still depend on the "testing partners" of any p-value through the total number of tests performed. This is why people talk about the so-called "multiple testing burden". But that is a subject for a future post. It is also the reason non-adaptive procedures can be severely underpowered compared to adaptive procedures when the p-values are correct.
  • I've appended the code to generate the histograms and calculate the q-values in this post in the following gist.

 

09 Aug

Interested in analyzing images of brains? Get started with open access data.

Editor's note: This is a guest post by Ani Eloyan. She is an Assistant Professor of Biostatistics at Brown University. Dr. Eloyan’s work focuses on semi-parametric likelihood based methods for matrix decompositions, statistical analyses of brain images, and the integration of various types of complex data structures for analyzing health care data. She received her PhD in statistics from North Carolina State University and subsequently completed a postdoctoral fellowship in the Department of Biostatistics at Johns Hopkins University. Dr. Eloyan and her team won the ADHD200 Competition discussed in this article. She tweets @eloyan_ani.
 
Neuroscience is one of the exciting new fields for biostatisticians interested in real world applications where they can contribute novel statistical approaches. Most research in brain imaging has historically involved studies run on small numbers of patients. While justified by the costs of data collection, the claims based on analyzing data for such small numbers of subjects often do not hold for our populations of interest. As discussed in this article, there is a huge demand for biostatisticians in the field of quantitative neuroscience; so-called neuroquants or neurostatisticians. However, while more statisticians are interested in the field, we are far from competing with other substantive domains. For instance, a quick search of the abstract keywords “brain imaging” and “neuroscience” in the online program of the upcoming JSM2015 conference returns 15 records, while a search for “genomics” and “genetics” returns 76.
Assuming you are trained in statistics and an aspiring neuroquant, how would you go about working with brain imaging data? As a graduate student in the Department of Statistics at NCSU several years ago, I was very interested in working on statistical methods that would be directly applicable to solve problems in neuroscience. But I had this same question: “Where do I find the data?” I soon learned that to really approach substantial relevant problems I also needed to learn about the subject matter underlying these complex data structures.
In recent years, several leading groups have uploaded their lab data with the common goal of fostering the collection of high dimensional brain imaging data to build powerful models that can give generalizable results. The Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC), founded in 2006, is a platform for public data sharing that facilitates streamlining data processing pipelines and compiling high dimensional imaging datasets for crowdsourcing the analyses. It includes data for people with neurological diseases and neurotypical children and adults. If you are interested in Alzheimer’s disease, you can check out ADNI. ABIDE provides data for people with Autism Spectrum Disorder and neurotypical peers. ADHD200 was released in 2011 as part of a competition to motivate building predictive methods for disease diagnosis using functional magnetic resonance imaging (fMRI) in addition to demographic information to predict whether a child has attention deficit hyperactivity disorder (ADHD). While the competition ended in 2011, the dataset has been widely utilized afterwards in studies of ADHD. According to Google Scholar, the paper introducing the ABIDE set has been cited 129 times since 2013, while the paper discussing the ADHD200 has been cited 51 times since 2012. These are only a few examples from the list of open access datasets that could be utilized by statisticians.
Anyone can download these datasets (you may need to register and complete some paperwork in some cases); however, there are several data processing and cleaning steps to perform before the final statistical analyses. These preprocessing steps can be daunting for a statistician new to the field, especially as the tools used for preprocessing may not be available in R. This discussion makes the case as to why statisticians need to be involved in every step of preprocessing the data, while this R package contains new tools linking R to FSL, a commonly used platform. However, as a newcomer, it can be easier to start with data that are already processed. This excellent overview by Dr. Martin Lindquist provides an introduction to the different types of analyses for brain imaging data from a statistician's point of view, while our paper provides tools in R and example datasets for implementing some of these methods. At least one course on Coursera can help you get started with functional MRI data. Talking to, and reading the papers of, biostatisticians working in quantitative neuroscience and of scientists in the field of neuroscience is key.
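As a minimal first step in R (a sketch only, assuming you have already downloaded a preprocessed NIfTI-format scan from one of the repositories above; the file name here is a placeholder):

library(oro.nifti)                                  # install.packages("oro.nifti")
img <- readNIfTI("scan.nii.gz", reorient = FALSE)   # hypothetical local file name
dim(img)                                            # voxel grid: 3D anatomical or 4D functional image
orthographic(img)                                   # quick axial/coronal/sagittal view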

30 Jul

Autonomous killing machines won't look like the Terminator...and that is why they are so scary

Just a few days ago many of the most incredible minds in science and technology urged governments to avoid using artificial intelligence to create autonomous killing machines. One thing that always happens when such a warning is issued is that you see the inevitable Terminator picture:

 

[Image: the Terminator]

 

The reality is that robots that walk and talk are getting better but still have a ways to go:

 

 

Does this mean that I think all those really smart people are silly for making this plea about AI now though? No, I think they are probably just in time.

The reason is that the first autonomous killing machines will definitely not look anything like the Terminator. They will more likely than not be drones, which are already in widespread use by the military and will soon be flying over our heads delivering Amazon products.

 

[Image: a drone]

 

I also think that when people think about "artificial intelligence" they also think about robots that can mimic the behaviors of a human being, including the ability to talk, hold a conversation, or pass the Turing test. But it turns out that the "artificial intelligence" you would need to create an automated killing system is much much simpler than that and is mostly some basic data science. The things you would need are:

  1. A drone with the ability to fly on its own
  2. The ability to make decisions about what people to target
  3. The ability to find those people and attack them

 

The first issue, being able to fly on autopilot, is something that has existed for a while. You have probably flown on a plane that has used autopilot for at least some of the flight. I won't get into the details on this one because I think it is the least interesting - it has been around a while and we didn't get the dire warnings about autonomous agents.

The second issue, deciding which people to target, has already been addressed as well. We have already seen programs like PRISM and others that collect individual-level metadata and presumably use those to make predictions. While the true and false positive rates are probably messed up by the fact that there are very very few "true positives", these programs are being developed, and even relatively simple statistical models can be used to build a predictor - even if those don't work.

The third issue is being able to find those people and attack them. This is where the real "artificial intelligence" comes into play. But it isn't artificial intelligence like you might think about. It could be just as simple as having the drone fly around and take people's pictures. Then we could use those pictures to match up with the people identified through metadata and attack them. Facebook has a paper up that demonstrates an algorithm that can identify people with near human level accuracy. This approach is based on something called deep neural nets, which sounds very intimidating, but is actually just a set of nested nonlinear logistic regression models. These models have gotten very good because (a) we are getting better at fitting them mathematically and computationally but mostly (b) we have much more data to train them with than we ever did before. The speed at which this part of the process is developing is (I think) why there is so much recent concern about potentially negative applications like autonomous killing machines.
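As a toy illustration of the "nested nonlinear logistic regressions" point (made-up weights and input in R, just to show the structure, not any real system):

set.seed(1)
x  <- rnorm(5)                                   # a 5-dimensional input (e.g., image features)
W1 <- matrix(rnorm(15), 3, 5); b1 <- rnorm(3)    # first-layer weights and intercepts
W2 <- rnorm(3); b2 <- rnorm(1)                   # second-layer weights and intercept

h <- plogis(W1 %*% x + b1)                       # layer 1: three logistic-regression-style units
y <- plogis(sum(W2 * h) + b2)                    # layer 2: a logistic regression on those outputs
y                                                # a predicted probability between 0 and 1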

The scary thing is that these technologies could be combined *right now* to create a system that is not controlled directly by humans but makes automated decisions and flies drones to carry out those decisions. The technology to shrink these types of deep neural net systems to identify people is so good that it can even be made simple enough to run on a phone for things like language translation, and it could easily be embedded in a drone.

So I am with Musk, Hawking, and others who would urge caution by governments in developing these systems. Just because we can make it doesn't mean it will do what we want. Just look at how well Facebook/Amazon/Google make suggestions for "other things you might like" to get an idea about how potentially disastrous automated killing systems could be.

 

17 Jul

The statistics department Moneyball opportunity

Moneyball is a book and a movie about Billy Beane. It makes statisticians look awesome and I loved the movie. I loved it so much I'm putting the movie trailer right here:

The basic idea behind Moneyball was that the Oakland Athletics were able to build a very successful baseball team on a tight budget by valuing skills that many other teams undervalued. In baseball those skills were things like on-base percentage and slugging percentage. By correctly valuing these skills and their impact on a team's winning percentage, the A's were able to build one of the most successful regular season teams on a minimal budget. This graph, from a nice FiveThirtyEight analysis, shows what an outlier they were.

 

[Figure: FiveThirtyEight graph showing the Oakland A's as an outlier]

 

I think that the data science/data analysis revolution that we have seen over the last decade has created a similar moneyball opportunity for statistics and biostatistics departments. Traditionally in these departments the highest-value activity has been publishing in a select number of important statistics journals (JASA, JRSS-B, Annals of Statistics, Biometrika, Biometrics, and more recently journals like Biostatistics and Annals of Applied Statistics). But there are some hugely valuable ways to contribute to statistics/data science that don't necessarily end with papers in those journals, like:

  1. Creating good, well-documented, and widely used software
  2. Being primarily an excellent collaborator who brings in grant money and is a major contributor to science through statistics
  3. Publishing in top scientific journals rather than statistics journals
  4. Being a good scientific communicator who can attract talent
  5. Being a statistics educator who can build programs

Another thing that is undervalued is talent without a Ph.D. in statistics or biostatistics. The fact that these skills are undervalued right now means that up-and-coming departments could identify and recruit talented people who might be missed by other departments and have a huge impact on the world. One tricky thing is that the rankings of departments are based on the votes of people from other departments who may or may not value these same skills. Another tricky thing is that many industry data science positions put incredibly high value on these skills, and so you might end up competing with them for people - a competition that will definitely drive up the market value of these data scientist/statisticians. But for the folks that want to stay in academia, now is a prime opportunity.

10 Jun

Johns Hopkins Data Science Specialization Capstone 2 Top Performers

The second capstone session of the Johns Hopkins Data Science Specialization concluded recently. This time, we had 1,040 learners sign up to participate in the session, which again featured a project developed in collaboration with the amazingly innovative folks at SwiftKey.

We've identified the learners listed below as the top performers in this capstone session. This is an incredibly talented group of people who have worked very hard throughout the entire nine-course specialization.  Please take some time to read their stories and look at their work. 

Ben Apple


Ben Apple is a Data Scientist and Enterprise Architect with the Department of Defense. Mr. Apple holds an MS in Information Assurance and is a PhD candidate in Information Sciences.

Why did you take the JHU Data Science Specialization?

As a self-trained data scientist I was looking for a program that would formalize my established skills while expanding my data science knowledge and toolbox.

What are you most proud of doing as part of the JHU Data Science Specialization?

The capstone project was the most demanding aspect of the program. As such, I am most proud of the final project. The project stretched each of us beyond the standard coursework of the program and was quite satisfying.

How are you planning on using your Data Science Specialization Certificate?

To open doors so that I may further my research into the operational value of applying data science thought and practice to analytics of my domain.

Final Project: https://bengapple.shinyapps.io/coursera_nlp_capstone

Project Slide Deck: http://rpubs.com/bengapple/71376

 

Ivan Corneillet


A technologist, thinker, and tinkerer, Ivan facilitates the establishment of start-up companies by advising these companies about the hiring process, product development, and technology development, including big data, cloud computing, and cybersecurity. In his 17-year career, Ivan has held a wide range of engineering and management positions at various Silicon Valley companies. Ivan is a recent Wharton MBA graduate, and he previously earned his master’s degree in computer science from the Ensimag, and his master’s degree in electrical engineering from Université Joseph Fourier, both located in France.

Why did you take the JHU Data Science Specialization?

There are three reasons why I decided to enroll in the JHU Data Science Specialization. First, fresh from college, my formal education was best suited for scaling up the Internet’s infrastructure. However, because every firm in every industry now creates products and services from analyses of data, I challenged myself to learn about Internet-scale datasets. Second, I am a big supporter of MOOCs. I do not believe that MOOCs should replace traditional education; however, I do believe that MOOCs and traditional education will eventually coexist in the same way that open-source and closed-source software does (read my blog post for more information on this topic: http://ivantur.es/16PHild). Third, the Johns Hopkins University brand certainly motivated me to choose their program. With a great name comes a great curriculum and fantastic professors, right?
Once I had completed the program, I was not disappointed at all. I had read a blog post that explained that the JHU Data Science Specialization was only a start to learning about data science. I certainly agree, but I would add that this program is a great start, because the curriculum emphasizes information that is crucial, while providing additional resources to those who wish to deepen their understanding of data science. My thanks to Professors Caffo, Leek, and Peng; the TAs, and Coursera for building and delivering this track!

What are you most proud of doing as part of the JHU Data Science Specialization?

The capstone project made for a very rich and exhilarating learning experience, and was my favorite course in the specialization. Because I did not have prior knowledge in natural language processing (NLP), I had to conduct a fair amount of research. However, the program’s minimal-guidance approach mimicked a real-world environment, and gave me the opportunity to leverage my experience with developing code and designing products to get the most out of the skillset taught in the track. The result was that I created a data product that implemented state-of-the-art NLP algorithms using what I think are the best technologies (i.e., C++, JavaScript, R, Ruby, and SQL), given the choices that I had made. Bringing everything together is what made me the most proud. Additionally, my product capabilities are a far cry from IBM’s Watson, but while I am well versed in supercomputer hardware, this track helped me to gain a much deeper appreciation of Watson’s AI.

How are you planning on using your Data Science Specialization Certificate?

Thanks to the broad skillset that the specialization covered, I feel confident wearing a data science hat. The concepts and tools covered in this program helped me to better understand the concerns that data scientists have and the challenges they face. From a business standpoint, I am also better equipped to identify the opportunities that lie ahead.

Final Project: https://paspeur.shinyapps.io/wordmaster-io/

Project Slide Deck: http://rpubs.com/paspeur/wordmaster-io

Oscar de León


Oscar is an assistant researcher at a research institute in a developing country; he graduated as a licentiate in biochemistry and microbiology in 2010 from the same university that hosts the institute. He has always loved technology, programming and statistics and has engaged in self-learning of these subjects from an early age, finally using his abilities in the health-related research in which he has been involved since 2008. He is now working on the design, execution and analysis of various research projects, consulting for other researchers and students, and is looking forward to developing his academic career in biostatistics.

Why did you take the JHU Data Science Specialization?

I wanted to integrate my R experience into a more comprehensive data analysis workflow, which is exactly what this specialization offers. This was in line with the objectives of my position at the research institute in which I work, so I presented a study plan to my supervisor and she approved it. I also wanted to engage in an activity which enabled me to document my abilities in a verifiable way, and a Coursera Specialization seemed like a good option.

Additionally, I've followed the JHSPH group's courses since the first offering of Mathematical Biostatistics Bootcamp in November 2012. They have proved the standards and quality of education at their institution, and it was not something to let go by.

What are you most proud of doing as part of the JHU Data Science Specialization?

I'm not one to usually interact with other students, and I certainly didn't do it during most of the specialization courses, but I decided to try out the fora on the Capstone project. It was wonderful; sharing ideas with, and receiving criticism from, my peers provided a very complete learning experience. After all, my contributions ended up being appreciated by the community, and a few posts saying so were very rewarding. This re-kindled my passion for teaching, and I'll try to engage in it more from now on.

How are you planning on using your Data Science Specialization Certificate?

First, I'll file it with HR at my workplace, since our research projects paid for the specialization :)

I plan to use the certificate as a credential for data analysis with R when it is relevant. For example, I've been interested in offering an R workshop for life sciences students and researchers at my University, and this certificate (and the projects I prepared during the specialization) could help me show I have a working knowledge on the subject.

Final Project: https://odeleon.shinyapps.io/ngram/

Project Slide Deck: http://rpubs.com/chemman/n-gram

Jeff Hedberg


I am passionate about turning raw data into actionable insights that solve relevant business problems. I also greatly enjoy leading large, multi-functional projects with impact in areas pertaining to machine and/or sensor data.  I have a Mechanical Engineering Degree and an MBA, in addition to a wide range of Data Science (IT/Coding) skills.

Why did you take the JHU Data Science Specialization?

I was looking to gain additional exposure into Data Science as a current practitioner, and thought this would be a great program.

What are you most proud of doing as part of the JHU Data Science Specialization?

I am most proud of completing all courses with distinction (top of peers).  Also, I'm proud to have achieved full points on my Capstone project having no prior experience in Natural Language Processing.

How are you planning on using your Data Science Specialization Certificate?

I am going to add this to my Resume and LinkedIn Profile.  I will use it to solidify my credibility as a data science practitioner of value.

Final Project: https://hedbergjeffm.shinyapps.io/Next_Word_Prediction/

Project Slide Deck: https://rpubs.com/jhedbergfd3s/74960

Hernán Martínez-Foffani


I was born in Argentina but now I'm settled in Spain. I've been working in computer technology since the eighties, in digital networks, programming, consulting, project management. Now, as CTO in a software company, I lead a small team of programmers developing a supply chain management app.

Why did you take the JHU Data Science Specialization?

In my opinion the curriculum is carefully designed with a nice balance between theory and practice. The JHU authoring and the teachers' widely known prestige ensure the content quality. The ability to choose the learning pace, one per month in my case, fits everyone's schedule.

What are you most proud of doing as part of the JHU Data Science Specialization?

The capstone definitely. It resulted in a fresh and interesting challenge. I sweat a lot, learned much more and in the end had a lot of fun.

How are you planning on using your Data Science Specialization Certificate?

While for the time being I don't have any specific plan for the certificate, it's a beautiful reward for the effort done.

Final Project: https://herchu.shinyapps.io/shinytextpredict/

Project Slide Deck: http://rpubs.com/herchu1/shinytextprediction

Francois Schonken

 


I'm a 36-year-old South African male, born and raised. I recently (4 years ago now) immigrated to lovely Melbourne, Australia. I wrapped up a BSc (Hons) in Computer Science with specialization in Computer Systems back in 2001. Next I co-founded a small boutique Software Development house operating from South Africa. I wrapped up my MBA, from Melbourne Business School, in 2013, and now I consult for my small boutique Software Development house and 2 (very) small internet start-ups.

Why did you take the JHU Data Science Specialization?

One of the core subjects in my MBA was Data Analysis, basically an MBA take on undergrad Statistics with a focus on application over theory (not that there was any shortage of theory). Waiting in a lobby some 6 months later, I was paging through the financial section of a business-focused weekly. I came across an article explaining how a Melbourne local applied a language called R to solve a grammatically and statistically challenging issue. The rest, as they say, is history.

What are you most proud of doing as part of the JHU Data Science Specialization?

I'm quite proud of both my Developing Data Products and Capstone projects, but for me these tangible outputs merely served as a vehicle to better understand a different way of thinking about data. I've spent most of my Software Development life dealing with one form or another of RDBMS (Relational Database Management System). This, in my experience, leads to a very set-oriented way of thinking about data.

I'm most proud of developing a new tool in my "Skills Toolbox" that I consider highly complementary to both my Software and Business outlook on projects.

How are you planning on using your Data Science Specialization Certificate?

Honestly, I had not planned on using my Certificate in and of itself. The skills I've acquired have already helped shape my thinking in designing an in-house web-based consulting collaboration platform.

I do not foresee this being the last time I'll be applying Data Science thinking moving forward on my journey.

Final Project: https://schonken.shinyapps.io/WordPredictor

Project Slide Deck: http://rpubs.com/schonken/sentence-builder

David J. Tagler

 


David is passionate about solving the world’s most important and challenging problems. His expertise spans chemical/biomedical engineering, regenerative medicine, healthcare technology management, information technology/security, and data science/analysis. David earned his Ph.D. in Chemical Engineering from Northwestern University and B.S. in Chemical Engineering from the University of Notre Dame.

Why did you take the JHU Data Science Specialization?

I enrolled in this specialization in order to advance my statistics, programming, and data analysis skills. I was interested in taking a series of courses that covered the entire data science pipeline. I believe that these skills will be critical for success in the future.

What are you most proud of doing as part of the JHU Data Science Specialization?

I am most proud of the R programming and modeling skills that I developed throughout this specialization. Previously, I had no experience with R. Now, I can effectively manage complex data sets, perform statistical analyses, build prediction models, create publication-quality figures, and deploy web applications.

How are you planning on using your Data Science Specialization Certificate?

I look forward to utilizing these skills in future research projects. Furthermore, I plan to take additional courses in data science, machine learning, and bioinformatics.

Final Project: http://dt444.shinyapps.io/next-word-predict

Project Slide Deck: http://rpubs.com/dt444/next-word-predict

Melissa Tan

 


I'm a financial journalist from Singapore. I did philosophy and computer science at the University of Chicago, and I'm keen on picking up more machine learning and data viz skills.

Why did you take the JHU Data Science Specialization?

I wanted to keep up with coding, while learning new tools and techniques for wrangling and analyzing data that I could potentially apply to my job. Plus, it sounded fun. :)

What are you most proud of doing as part of the JHU Data Science Specialization?

Building a word prediction app pretty much from scratch (with a truckload of forum reading). The capstone project seemed insurmountable initially and ate up all my weekends, but getting the app to work passably was worth it.

How are you planning on using your Data Science Specialization Certificate?

It'll go on my CV, but I think it's more important to be able to actually do useful things. I'm keeping an eye out for more practical opportunities to apply and sharpen what I've learnt.

Final Project: https://melissatan.shinyapps.io/word_psychic/

Project Slide Deck: https://rpubs.com/melissatan/capstone

Felicia Yii


Felicia likes to dream, think and do. With over 20 years in the IT industry, her current fascination is at the intersection of people, information and decision-making. Ever inquisitive, she has acquired expertise in subjects as diverse as coding, cookery, costume making and cosmetics chemistry. It’s not apparent that there is anything she can’t learn to do, apart from housework. Felicia lives in Wellington, New Zealand with her husband, two children and two cats.

Why did you take the JHU Data Science Specialization?

Well, I love learning and the JHU Data Science Specialization appealed to my thirst for a new challenge. I'm really interested in how we can use data to help people make better decisions.  There's so much data out there these days that it is easy to be overwhelmed by it all. Data visualisation was at the heart of my motivation when starting out. As I got into the nitty gritty of the course, I really began to see the power of making data accessible and appealing to the data-agnostic world. There's so much potential for data science thinking in my professional work.

What are you most proud of doing as part of the JHU Data Science Specialization?

Getting through it for starters while also working and looking after two children. Seriously though, being able to say I know what 'practical machine learning' is all about.  Before I started the course, I had limited knowledge of statistics, let alone knowing how to apply them in a machine learning context.  I was thrilled to be able to use what I learned to test a cool game concept in my final project.

How are you planning on using your Data Science Specialization Certificate?

I want to use what I have learned in as many ways possible. Firstly, I see opportunities to apply my skills to my day-to-day work in information technology. Secondly, I would like to help organisations that don't have the skills or expertise in-house to apply data science thinking to help their decision making and communication. Thirdly, it would be cool one day to have my own company consulting on data science. I've more work to do to get there though!

Final Project: https://micasagroup.shinyapps.io/nwpgame/

Project Slide Deck: https://rpubs.com/MicasaGroup/74788

 

08 Jun

I'm a data scientist - mind if I do surgery on your heart?

There has been a lot of recent interest from scientific journals and from other folks in creating checklists for data science and data analysis. The idea is that the checklist will help prevent results that won't reproduce or replicate from the literature. One analogy that I'm frequently hearing is the analogy with checklists for surgeons that can help reduce patient mortality.

The one major difference between checklists for surgeons and checklists I'm seeing for research purposes is the difference in credentialing between people allowed to perform surgery and people allowed to perform complex data analysis. You would never let me do surgery on you. I have no medical training at all. But I'm frequently asked to review papers that include complicated and technical data analyses yet have no trained data analysts or statisticians among the authors. The most common approach is that a postdoc or graduate student in the group is assigned to do the analysis, even if they don't have much formal training. Whenever this happens, red flags go up all over the place. Just like I wouldn't trust someone without years of training and a medical license to do surgery on me, I wouldn't let someone without years of training and credentials in data analysis make major conclusions from complex data analysis.

You might argue that the consequences for surgery and for complex data analysis are on completely different scales. I'd agree with you, but not in the direction that you might think. I would argue that high-pressure and complex data analysis can have much larger consequences than surgery. In surgery there is usually only one person who can be hurt. But if you do a bad data analysis, say claiming that vaccines cause autism, that can have massive consequences for hundreds or even thousands of people. So complex data analysis, especially for important results, should be treated with at least as much care as surgery.

The reason why I don't think checklists alone will solve the problem is that they are likely to be used by people without formal training. One obvious (and recent) example that I think makes this really clear is the HealthKit data we are about to start seeing. A ton of people signed up for studies on their iPhones and it has been all over the news. The checklist will (almost certainly) say to have a big sample size. HealthKit studies will certainly pass the checklist, but they are going to get Truman/Deweyed big time if they aren't careful about biased sampling.

If I walked into an operating room and said I'm going to start dabbling in surgery I would be immediately thrown out. But people do that with statistics and data analysis all the time. What is really needed is to require careful training and expertise in data analysis on each paper that analyzes data. Until we treat it as a first-class component of the scientific process we'll continue to see retractions, falsifications, and irreproducible results flourish.

01 Jun

Interview with Chris Wiggins, chief data scientist at the New York Times

Editor's note: We are trying something a little new here and doing an interview with Google Hangouts on Air. The interview will be live at 11:30am EST. I have some questions lined up for Chris, but if you have others you'd like to ask, you can tweet them @simplystats and I'll see if I can work them in. After the livestream we'll leave the video on YouTube so you can check out the interview if you can't watch the live stream. I'm embedding the YouTube video here, but if you can't see the live stream when it is running, go check out the event page: https://plus.google.com/events/c7chrkg0ene47mikqrvevrg3a4o.