14 Mar

π day special: How to use Bioconductor to find empirical evidence in support of π being a normal number

Editor's note: Today, 3/14/15, at some point between 9:26:53 and 9:26:54, it was the most π day of them all. Below is a repost from last year.

Happy π day everybody!

I wanted to write some simple code (included below) to test the parallelization capabilities of my new cluster. So, in honor of π day, I decided to check for evidence that π is a normal number. A normal number is a real number whose infinite sequence of digits has the property that the probability of any given m digit pattern appearing at a randomly chosen position is 10^-m. For example, using the Poisson approximation, we can predict that the pattern "123456789" should show up between 0 and 3 times in the first billion digits of π (it actually shows up twice, starting at the 523,551,502-th and 773,349,079-th decimal places).
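
To see where that prediction comes from, here is a quick sketch of the Poisson calculation (a back-of-the-envelope check, separate from the analysis code below). A 9-digit pattern has probability 10^-9 of starting at any given position, and there are about 10^9 positions, so the expected count is roughly 1:

lambda <- 1e9 * 10^-9 ## about 10^9 starting positions, each with probability 10^-9
dpois(2, lambda)      ## probability of exactly 2 occurrences, about 0.18
ppois(3, lambda)      ## probability of 0 to 3 occurrences, about 0.98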

To test our hypothesis, let Y1, ..., Y100 be the number of occurrences of "00", "01", ..., "99" in the first billion digits of π. If π is in fact normal, then the Ys should be approximately IID binomial with N = 1 billion and p = 0.01. In the qq-plot below I show z-scores, (Y - 10,000,000) / √9,900,000, which appear to follow a normal distribution as predicted by our hypothesis. Further evidence for π being normal is provided by repeating this experiment for 3, 4, 5, 6, and 7 digit patterns (for 5, 6, and 7 I sampled 10,000 patterns). Note that we can also perform a chi-square test against the uniform distribution. For patterns of size 1, 2, 3, and 4 the p-values were 0.84, 0.89, 0.92, and 0.99.

[Figure: qq-plots of z-scores for 2-7 digit pattern counts]

Another test we can perform is to divide the 1 billion digits into 100,000 non-overlapping segments of length 10,000. The vector of counts for any given pattern should also be binomial. Below I also include these qq-plots.

[Figure: qq-plots of per-segment counts for 1-digit patterns]

These observed counts should also be independent, and to explore this we can look at autocorrelation plots:

[Figure: autocorrelation plots of the per-segment counts]

To do this in about an hour and with just a few lines of code (included below), I used the Bioconductor Biostrings package to match strings and the foreach function to parallelize.


library(Biostrings)
library(doParallel)
registerDoParallel(cores = 48)
## read in the first billion digits of pi and drop the leading "3."
x <- scan("pi-billion.txt", what = "c")
x <- substr(x, 3, nchar(x))
x <- BString(x)
n <- length(x)
par(mfrow = c(2, 3))
for (d in 2:7) {
  if (d < 5) {
    ## all 10^d patterns of length d
    patterns <- sprintf(paste0("%0", d, "d"), seq(0, 10^d - 1))
  } else {
    ## for longer patterns, sample 10,000 of them
    patterns <- sprintf(paste0("%0", d, "d"), sample(10^d, 10^4) - 1)
  }
  ## count occurrences of each pattern in parallel
  res <- foreach(pat = patterns, .combine = c) %dopar% countPattern(pat, x)
  p <- 1/(10^d)
  z <- (res - n*p) / sqrt(n*p*(1 - p))
  qqnorm(z, xlab = "Theoretical quantiles", ylab = "Observed z-scores", main = paste(d, "digits"))
  abline(0, 1)
  ## chi-square test for uniformity (correction: original post had length(res))
  if (d < 5) print(1 - pchisq(sum((res - n*p)^2/(n*p)), length(res) - 1))
}
### Now count each pattern's occurrences within segments
d <- 1
m <- 10^4 ## segment length, giving n/m = 10^5 segments
patterns <- sprintf(paste0("%0", d, "d"), seq(0, 10^d - 1))
res <- foreach(pat = patterns, .combine = cbind) %dopar% {
  tmp <- start(matchPattern(pat, x)) ## positions of each occurrence
  tmp2 <- floor((tmp - 1)/m)         ## 0-based segment index
  return(tabulate(tmp2 + 1, nbins = n/m)) ## counts per segment
}
## qq-plots of the per-segment counts
par(mfrow = c(2, 5))
p <- 1/(10^d)
for (i in 1:ncol(res)) {
  z <- (res[, i] - m*p) / sqrt(m*p*(1 - p))
  qqnorm(z, xlab = "Theoretical quantiles", ylab = "Observed z-scores", main = paste(i - 1))
  abline(0, 1)
}
## ACF plots to check independence of the per-segment counts
par(mfrow = c(2, 5))
for (i in 1:ncol(res)) acf(res[, i])

NB: A normal number has the above stated property in any base. The examples above are for base 10.

13 Mar

De-weaponizing reproducibility

A couple of weeks ago Roger and I went to a conference on statistical reproducibility held at the National Academy of Sciences. The discussion was pretty wide ranging and I love that thinking about reproducibility is coming back to statistics. There was pretty widespread support for the idea that prevention is the right way to approach reproducibility.

It turns out I was the last speaker of the whole conference. This is an unenviable position to be in, with so many bright folks speaking first; they covered a huge amount of what I wanted to say. My talk focused on three key points:
  1. The tools for reproducibility already exist, the barrier isn't tools
  2. We need to de-weaponize reproducibility
  3. Prevention is the right approach to reproducibility

In terms of the first point, tools like IPython, knitr, and Galaxy can be used to make all but the absolute largest analyses reproducible right now. Our group does this all the time with our papers and so do many others. The problem isn't a lack of tools.

Speaking to point two, I think many people would agree that part of the issue is culture change. One issue that is increasingly concerning to me is the "weaponization" of reproducibility. I have been noticing that some of us (like me, my students, other folks at JHU, and lots of particularly junior computational people elsewhere) are trying really hard to be reproducible. Most of the time this results in really positive reactions from the community. But when a co-author of mine and I wrote that paper about the science-wise false discovery rate, one of the discussants used our code (great), improved on it (great), identified a bug (great), and then did his level best to humiliate us both in front of the editor and the general public because of that bug (not so great).

I have seen this happen several times. Most of the time if a paper is reproducible the authors get a pat on the back and their code is either ignored or used in a positive way. But for high-profile and important problems, people largely use reproducibility to:
  1. Impose regulatory hurdles in the short term while people transition to reproducibility. One clear example of this is the Secret Science Reform Act, a bill that imposes strict reproducibility conditions on all science before it can be used as evidence for regulation.
  2. Humiliate people who aren't good coders or who make mistakes in their code. This is what happened with my paper when I produced reproducible code for my analysis, but it has also happened to other people.
  3. Take advantage of people's code to plagiarize or straight up steal work. I have stories about this I'd rather not put on the internet.

Of the three, I feel like (1) and (2) are the most common. Plagiarism and scooping by theft I think are actually relatively rare based on my own anecdotal experience. But I think that the "weaponization" of reproducibility to block regulation or to humiliate folks who are new to computational sciences is more common than I'd like it to be. Until reproducibility is the standard for everyone - which I think is possible now and will happen as the culture changes -  the people who are the early adopters are at risk of being bludgeoned with their own reproducibility. As a community, if we want widespread reproducibility adoption we have to be ferocious about not allowing this to happen.

03 Mar

The elements of data analytic style - so much for a soft launch

Editor's note: I wrote a book called Elements of Data Analytic Style. Buy it on Leanpub or Amazon! If you buy it on Leanpub, you get all updates (there are likely to be some) for free and you can pay what you want (including zero) but the author would be appreciative if you'd throw a little scratch his way. 

So uh, I was going to soft launch my new book The Elements of Data Analytic Style yesterday. I figured I'd just quietly email my Coursera courses to let them know I created a new reference. It turns out that that wasn't very quiet. First this happened:

[Embedded tweet]

and sure enough the website was down:

[Screenshot: the book's website down]

then overnight it did something like 6,000+ units:

[Screenshot: overnight sales of 6,000+ units]

So lesson learned, there is no soft open with Coursera. Here is the post I was going to write though:

### Post I was gonna write

I have been doing data analysis for something like 10 years now (gulp!) and teaching data analysis in person for 6+ years. One of the things we do in my data analysis class at Hopkins is to perform a complete data analysis (from raw data to written report) every couple of weeks. Then I grade each assignment for everything from data cleaning to the written report and reproducibility. I've noticed over the course of teaching this class (and classes online) that there are many common elements of data analytic style that I don't often see in textbooks, or when I do, I see them spread across multiple books.

I've posted on some of these issues in open source guides I've posted to GitHub.

But I decided that it might be useful to have a more complete guide to the "art" part of data analysis. One goal is to summarize in a succinct way the most common difficulties encountered by practicing data analysts. It may be a useful guide for peer reviewers who could refer to section numbers when evaluating manuscripts, for instructors who have to grade data analyses, as a supplementary text for a data analysis class, or just as a useful reference. It is modeled loosely in format and aim on the Elements of Style by William Strunk. Just as with the EoS, both the checklist and my book cover a small fraction of the field of data analysis, but my experience is that once these elements are mastered, data analysts benefit most from hands on experience in their own discipline of application, and that many principles may be non-transferable beyond the basics. But just as with writing, new analysts would do better to follow the rules until they know them well enough to violate them.

The book includes a basic checklist that may be useful as a guide for beginning data analysts or as a rubric for evaluating data analyses. I'm reproducing it here so you can comment on/hate/enjoy it.

The data analysis checklist

This checklist provides a condensed look at the information in this book. It can be used as a guide during the process of a data analysis, as a rubric for grading data analysis projects, or as a way to evaluate the quality of a reported data analysis.

I Answering the question

1. Did you specify the type of data analytic question (e.g. exploration, association, causality) before touching the data?
2. Did you define the metric for success before beginning?
3. Did you understand the context for the question and the scientific or business application?
4. Did you record the experimental design?
5. Did you consider whether the question could be answered with the available data?

II Checking the data

1. Did you plot univariate and multivariate summaries of the data?
2. Did you check for outliers?
3. Did you identify the missing data code?

III Tidying the data

1. Is each variable one column?
2. Is each observation one row?
3. Do different data types appear in each table?
4. Did you record the recipe for moving from raw to tidy data?
5. Did you create a code book?
6. Did you record all parameters, units, and functions applied to the data?

IV Exploratory analysis

1. Did you identify missing values?
2. Did you make univariate plots (histograms, density plots, boxplots)?
3. Did you consider correlations between variables (scatterplots)?
4. Did you check the units of all data points to make sure they are in the right range?
5. Did you try to identify any errors or miscoding of variables?
6. Did you consider plotting on a log scale?
7. Would a scatterplot be more informative?

V Inference

1. Did you identify what large population you are trying to describe?
2. Did you clearly identify the quantities of interest in your model?
3. Did you consider potential confounders?
4. Did you identify and model potential sources of correlation such as measurements over time or space?
5. Did you calculate a measure of uncertainty for each estimate on the scientific scale?

VI Prediction

1. Did you identify in advance your error measure?
2. Did you immediately split your data into training and validation?
3. Did you use cross validation, resampling, or bootstrapping only on the training data?
4. Did you create features using only the training data?
5. Did you estimate parameters only on the training data?
6. Did you fix all features, parameters, and models before applying to the validation data?
7. Did you apply only one final model to the validation data and report the error rate?

VII Causality

1. Did you identify whether your study was randomized?
2. Did you identify potential reasons that causality may not be appropriate such as confounders, missing data, non-ignorable dropout, or unblinded experiments?
3. If not, did you avoid using language that would imply cause and effect?

VIII Written analyses

1. Did you describe the question of interest?
2. Did you describe the data set, experimental design, and question you are answering?
3. Did you specify the type of data analytic question you are answering?
4. Did you specify in clear notation the exact model you are fitting?
5. Did you explain on the scale of interest what each estimate and measure of uncertainty means?
6. Did you report a measure of uncertainty for each estimate on the scientific scale?

IX Figures

1. Does each figure communicate an important piece of information or address a question of interest?
2. Do all your figures include plain language axis labels?
3. Is the font size large enough to read?
4. Does every figure have a detailed caption that explains all axes, legends, and trends in the figure?

X Presentations

1. Did you lead with a brief, understandable to everyone statement of your problem?
2. Did you explain the data, measurement technology, and experimental design before you explained your model?
3. Did you explain the features you will use to model data before you explain the model?
4. Did you make sure all legends and axes were legible from the back of the room?

XI Reproducibility

1. Did you avoid doing calculations manually?
2. Did you create a script that reproduces all your analyses?
3. Did you save the raw and processed versions of your data?
4. Did you record all versions of the software you used to process the data?
5. Did you try to have someone else run your analysis code to confirm they got the same answers?

XII R packages

1. Did you make your package name "Googleable"?
2. Did you write unit tests for your functions?
3. Did you write help files for all functions?
4. Did you write a vignette?
5. Did you try to reduce dependencies to actively maintained packages?
6. Have you eliminated all errors and warnings from R CMD CHECK?

02 Mar

Advanced Statistics for the Life Sciences MOOC Launches Today

In this four week course we will teach statistical techniques that are commonly used in the analysis of high-throughput data and their corresponding R implementations. In Week 1 we will explain inference in the context of high-throughput data and introduce the concept of error-controlling procedures. We will describe the strengths and weaknesses of the Bonferroni correction, FDR, and q-values. We will show how to implement these in cases in which thousands of tests are conducted, as is typically done with genomics data. In Week 2 we will introduce the concept of mathematical distance and how it is used in exploratory data analysis, clustering, and machine learning. We will describe how techniques such as principal component analysis (PCA) and the singular value decomposition (SVD) can be used for dimension reduction in high dimensional data. During Week 3 we will describe confounding, latent variables, and factor analysis in the context of high dimensional data and how this relates to batch effects. We will show how to implement methods such as SVA to perform inference on data affected by batch effects. Finally, during Week 4 we will show how statistical modeling, and empirical Bayes modeling in particular, are powerful techniques that greatly improve precision in high-throughput data. We will be using R code to explain concepts throughout the course. We will also be using exploratory data analysis and data visualization to motivate the techniques we teach during each week.
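
For a taste of the Week 1 material, here is a minimal sketch of the Bonferroni and Benjamini-Hochberg adjustments using base R's p.adjust, run on simulated p-values (an illustration, not code from the course):

## simulate 9,000 null p-values and 1,000 small "signal" p-values
set.seed(1)
pvals <- c(runif(9000), rbeta(1000, 1, 50))
bonf <- p.adjust(pvals, method = "bonferroni") ## controls the family-wise error rate
bh <- p.adjust(pvals, method = "BH")           ## Benjamini-Hochberg, controls the FDR
sum(bonf < 0.05) ## conservative: fewer discoveries
sum(bh < 0.05)   ## FDR control: more discoveries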

18 Feb

Navigating Big Data Careers with a Statistics PhD

Editor's note: This is a guest post by Sherri Rose. She is an Assistant Professor of Biostatistics in the Department of Health Care Policy at Harvard Medical School. Her work focuses on nonparametric estimation, causal inference, and machine learning in health settings. Dr. Rose received her BS in statistics from The George Washington University and her PhD in biostatistics from the University of California, Berkeley, where she coauthored a book on Targeted Learning. She tweets @sherrirose.

A quick scan of the science and technology headlines often yields two words: big data. The amount of information we collect has continued to increase, and this data can be found in varied sectors, ranging from social media to genomics. Claims are made that big data will solve an array of problems, from understanding devastating diseases to predicting political outcomes. There is substantial “big data” hype in the press, as well as business and academic communities, but how do upcoming, current, and recent statistical science PhDs handle the array of training opportunities and career paths in this new era? Undergraduate interest in statistics degrees is exploding, bringing new talent to graduate programs and the post-PhD job pipeline.  Statistics training is diversifying, with students focusing on theory, methods, computation, and applications, or a blending of these areas. A few years ago, Rafa outlined the academic career options for statistics PhDs in two posts, which cover great background material I do not repeat here. The landscape for statistics PhD careers is also changing quickly, with a variety of companies attracting top statistics students in new roles.  As a new faculty member at the intersection of machine learning, causal inference, and health care policy, I've already found myself frequently giving career advice to trainees.  The choices have become much more nuanced than just academia vs. industry vs. government.

So, you find yourself inspired by big data problems and fascinated by statistics. While you are a student, figuring out what you enjoy working on is crucial. This exploration could involve engaging in internship opportunities or collaborating with multiple faculty on different types of projects. Both positive and negative experiences can help you identify your preferences.

Undergraduates may wish to spend a couple months at a Summer Institute for Training in Biostatistics or National Science Foundation Research Experience for Undergraduates. There are also many MOOC options to get a taste of different areas of statistics. Selecting a graduate program for PhD study can be a difficult choice, especially when your interests within statistics have yet to be identified, as is often the case for undergraduates. However, if you know that you have interests in software and programming, it can be easy to sort which statistical science PhD programs have a curricular or research focus in this area by looking at department websites. Similarly, if you know you want to work in epidemiologic methods, genomics, or imaging, specific programs are going to jump right to the top as good fits. Getting advice from faculty in your department will be important. Competition for admissions into statistics and biostatistics PhD programs has continued to increase, and most faculty advise applying to as many relevant programs as is reasonable given the demands on your time and finances. If you end up sitting on multiple (funded) offers come April, talking to current students, student alums, and looking at alumni placement can be helpful. Don't hesitate to contact these people, selectively. Most PhD programs genuinely do want you to end up in the place that is best for you, even if it is not with them.

Once you're in a PhD program, internship opportunities for graduate students are listed each year by the American Statistical Association. Your home department may also have ties with local research organizations and companies with openings. Internships can help you identify future positions and the types of environments where you will flourish in your career. Lauren Kunz, a recent PhD graduate in biostatistics from Harvard University, is currently a Statistician at the National Heart, Lung, and Blood Institute (NHLBI) of the National Institutes of Health. Dr. Kunz said, "As a previous summer intern at the NHLBI, I was able to get a feel for the day to day life of a biostatistician at the NHLBI. I found the NHLBI Office of Biostatistical Research to be a collegial, welcoming environment, and I soon learned that NHLBI biostatisticians have the opportunity to work on a variety of projects, very often collaborating with scientists and clinicians. Due to the nature of these collaborations, the biostatisticians are frequently presented with scientifically interesting and important statistical problems. This work often motivates methodological research which in turn has immediate, practical applications. These factors matched well with my interest in collaborative research that is both methodological and applied."

Industry is also enticing to statistics PhDs, particularly those with an applied or computational focus, like Stephanie Sapp and Alyssa Frazee. Dr. Sapp has a PhD in statistics from the University of California, Berkeley, and is currently a Quantitative Analyst at Google. She also completed an internship there the summer before she graduated. In commenting about her choice to join Google, Dr. Sapp said, "I really enjoy both academic research and seeing my work used in practice. Working at Google allows me to continue pursuing new and interesting research topics, as well as see my results drive more immediate impact." Dr. Frazee just finished her PhD in biostatistics at Johns Hopkins University and previously spent a summer exploring her interests at Hacker School. While she applied to both academic and industry positions, receiving multiple offers, she ultimately chose to go into industry and work for Stripe: "I accepted a tech company's offer for many reasons, one of them being that I really like programming and writing code. There are tons of opportunities to grow as a programmer/engineer at a tech company, but building an academic career on that foundation would be more of a challenge. I'm also excited about seeing my statistical work have more immediate impact. At smaller companies, much of the work done there has visible/tangible bearing on the product. Academic research in statistics is operating a lot closer to the boundaries of what we know and discovering a lot of cool stuff, which means researchers get to try out original ideas more often, but the impact is less immediately tangible. A new method or estimator has to go through a lengthy peer review/publication process and be integrated into the community's body of knowledge, which could take several years, before its impact can be fully observed." One consideration in choosing a job, shared by Dr. Frazee, Dr. Sapp, and Dr. Kunz, reflects many in the early career statistics community: having an impact.

Interest in both developing methods and translating statistical advances into practice is a common theme in the big data statistics world, but not one that always leads to an industry or government career. There are also academic opportunities in statistics, biostatistics, and interdisciplinary departments like my own where your work can have an impact on current science.  The Department of Health Care Policy (HCP) at Harvard Medical School has 5 tenure-track/tenured statistics faculty members, including myself, among a total of about 20 core faculty members. The statistics faculty work on a range of theoretical and methodological problems while collaborating with HCP faculty (health economists, clinician researchers, and sociologists) and leading our own substantive projects in health care policy (e.g., Mass-DAC). I find it to be a unique and exciting combination of roles, and love that the science truly informs my statistical research, giving it broader impact. Since joining the department a year and a half ago, I've worked in many new areas, such as plan payment risk adjustment methodology. I have also applied some of my previous work in machine learning to predicting adverse health outcomes in large datasets. Here, I immediately saw a need for new avenues of statistical research to make the optimal approach based on statistical theory align with an optimal approach in practice. My current research portfolio is diverse; example projects include the development of a double robust estimator for the study of chronic disease, leading an evaluation of a new state-wide health plan initiative, and collaborating with department colleagues on statistical issues in all-payer claims databases, physician prescribing intensification behavior, and predicting readmissions. The larger statistics community at Harvard also affords many opportunities to interact with statistics faculty across the campus, and university-wide junior faculty events have connected me with professors in computer science and engineering. I feel an immense sense of research freedom to pursue my interests at HCP, which was a top priority when I was comparing job offers.

Hadley Wickham, of ggplot2 and Advanced R fame, took on a new role as Chief Scientist at RStudio in 2013. Freedom was also a key component in his choice to move sectors: "For me, the driving motivation is freedom: I know what I want to work on, I just need the freedom (and support) to work on it. It's pretty unusual to find an industry job that has more freedom than academia, but I've been noticeably more productive at RStudio because I don't have any meetings, and I can spend large chunks of time devoted to thinking about hard problems. It's not possible for everyone to get that sort of job, but everyone should be thinking about how they can negotiate the freedom to do what makes them happy. I really like the thesis of Cal Newport's book So Good They Can't Ignore You - the better you are at your job, the greater your ability to negotiate for what you want."

There continues to be a strong emphasis in the workforce on the vaguely defined field of “data science,” which incorporates the collection, storage, analysis, and interpretation of big data. Statisticians not only work in and lead teams with other scientists (e.g., clinicians, biologists, computer scientists) to attack big data challenges, but also with each other. Your time as a statistics trainee is an amazing opportunity to explore your strengths and preferences, and which sectors and jobs appeal to you. Do your due diligence to figure out which employers are interested in and supportive of the type of career you want to create for yourself. Think about how you want to spend your time, and remember that you're the only person who has to live your life once you get that job. Other people's opinions are great, but your values and instincts matter too. Your definition of "best" doesn't have to match someone else's. Ask questions! Try new things! The potential for breakthroughs with novel flexible methods is strong. Statistical science training has progressed to the point where trainees are armed with thorough knowledge in design, methodology, theory, and, increasingly, data collection, applications, and computation. Statisticians working in data science are poised to continue making important contributions in all sectors for years to come. Now, you just need to decide where you fit.

13 Feb

Introduction to Linear Models and Matrix Algebra MOOC starts this Monday Feb 16

Matrix algebra is the language of modern data analysis. We use it to develop and describe statistical and machine learning methods, and to code efficiently in languages such as R, MATLAB, and Python. Concepts such as principal component analysis (PCA) are best described with matrix algebra. It is particularly useful for describing linear models.

Linear models are everywhere in data analysis. ANOVA, linear regression, limma, edgeR, DESeq, most smoothing techniques, and batch correction methods such as SVA and ComBat are based on linear models. In this two week MOOC we will describe the basics of matrix algebra, demonstrate how linear models are used in the life sciences, and show how to implement these efficiently in R.
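
As a small preview of the connection between matrix algebra and linear models, here is a sketch showing that the least squares solution (X'X)^-1 X'y, computed directly, matches the coefficients from lm (simulated data; an illustration, not course material):

## simulate data from a linear model with intercept 2 and slope 3
set.seed(1)
x <- rnorm(100)
y <- 2 + 3*x + rnorm(100)
X <- cbind(1, x)                         ## design matrix with an intercept column
solve(t(X) %*% X, t(X) %*% y)            ## least squares via matrix algebra
coef(lm(y ~ x))                          ## same estimates from lm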

Update: Here is the link to the class

12 Feb

Is Reproducibility as Effective as Disclosure? Let's Hope Not.

Jeff and I just this week published a commentary in the Proceedings of the National Academy of Sciences on our latest thinking on reproducible research and its ability to solve the reproducibility/replication "crisis" in science (there's a version on arXiv too). In a nutshell, we believe reproducibility (making data and code available so that others can recompute your results) is an essential part of science, but it is not going to end the crisis of confidence in science. In fact, I don't think it'll even make a dent. The problem is that reproducibility, as a tool for preventing poor research, comes in at the wrong stage of the research process (the end). While requiring reproducibility may deter people from committing outright fraud (a small group), it won't stop people who just don't know what they're doing with respect to data analysis (a much larger group).

In an eerie coincidence, Jesse Eisinger of the investigative journalism non-profit ProPublica has just published a piece on the New York Times Dealbook site discussing how disclosure requirements in the financial industry have produced meager results. He writes

Over the last century, disclosure and transparency have become our regulatory crutch, the answer to every vexing problem. We require corporations and government to release reams of information on food, medicine, household products, consumer financial tools, campaign finance and crime statistics. We have a booming “report card” industry for a range of services, including hospitals, public schools and restaurants.

The rationale for all this disclosure is that

someone, somewhere reads the fine print in these contracts and keeps corporations honest. It turns out what we laymen intuit is true: No one reads them, according to research by a New York University law professor, Florencia Marotta-Wurgler.

But disclosure is nevertheless popular because how could you be against it?

The disclosure bonanza is easy to explain. Nobody is against it. It’s politically expedient. Companies prefer such rules, especially in lieu of actual regulations that would curtail bad products or behavior. The opacity lobby — the remora fish class of lawyers, lobbyists and consultants in New York and Washington — knows that disclosure requirements are no bar to dodgy practices. You just have to explain what you’re doing in sufficiently incomprehensible language, a task that earns those lawyers a hefty fee.

In the now infamous Duke Saga, Keith Baggerly was able to reproduce the work of Potti et al. after roughly 2,000 hours of work because the data were publicly available (although the code was not). It's not clear how much time would have been saved if the code had been available, but it seems reasonable to assume that it would have taken some amount of time to understand the analysis, if not reproduce it. Once the errors in Potti's work were discovered, it took 5 years for the original Nature Medicine paper to be retracted.

Although you could argue that the process worked in some sense, it came at tremendous cost of time and money. Wouldn't it have been better if the analysis had been done right in the first place?

09 Feb

The trouble with evaluating anything

It is very hard to evaluate people's productivity or work in any meaningful way. This problem is the source of:

  1. Consternation about peer review
  2. The reason why post-publication peer review doesn't work
  3. Consternation about faculty evaluation
  4. Major problems at companies like Yahoo and Microsoft.

Roger and I were just talking about this problem in the context of evaluating the impact of software as a faculty member and Roger suggested the problem is that:

Evaluating people requires real work and so people are always looking for shortcuts

To evaluate a person's work or their productivity requires three things:

  1. To be an expert in what they do
  2. To have absolutely no reason to care whether they succeed or not
  3. To have time available to evaluate them

These three fundamental things are at the heart of why it is so hard to get good evaluations of people and why peer review and other systems are under such fire. The main source of the problem is the conflict between 1 and 2. The group of people in any organization, or at any scale, that is truly world class at any given topic, from software engineering to history, is small. It has to be by definition. This group of people inevitably has some reason to care about the success of the other people in that same group. Either they work with the other world class people and want them to succeed, or they are competing with them, intentionally or unintentionally.

The conflict between being an expert and having no stake wouldn't be such a problem if it wasn't for issue number 3: the time to evaluate people. To truly get good evaluations, what you need is for someone who isn't an expert in a field, and so has no stake, to take the time to become an expert and then evaluate the person/software. But this requires a huge amount of effort on the part of a reviewer who has to become expert in a new field. Given that reviewing is often considered the least important task in people's workflow - evidenced by how little value we put on people acting as peer reviewers for journals, or how little credit people get for doing a good job on promotion evaluations in companies - it is no wonder people don't take the time to become experts.

I actually think that tenure review committees at forward thinking places may be the best at this (Rafa said the same thing about NIH study section). They at least attempt to get outside reviews from people who are unbiased about the work that a faculty member is doing before they are promoted. This system, of course, has large and well-documented problems, but I think it is better than having a person's direct supervisor - who clearly has a stake - be the only person evaluating them. It is also better than only using quantifiable metrics like the number of papers and the impact factor of the corresponding journals. I also think that most senior faculty who evaluate people take the job very seriously despite the only incentive being good citizenship.

Since real evaluation requires hard work and expertise, most of the time people are looking for a shortcut. These shortcuts typically take the form of quantifiable metrics. In the academic world these shortcuts are things like:

  1. Number of papers
  2. Citations to academic papers
  3. The impact factor of a journal
  4. Downloads of a person's software

I think all of these things are associated with quality but none define quality. You could try to model the relationship, but it is very hard to come up with a universal definition for the outcome you are trying to model. In academics, some people have suggested that open review or post-publication review solves the problem. But this is only true for a very small subset of cases that violate rule number 2. The only papers that get serious post-publication review are where people have an incentive for the paper to go one way or the other. This means that papers in Science will be post-pub reviewed much much more often than equally important papers in discipline-specific journals - just because people care more about Science. This will leave the vast majority of papers unreviewed - as evidenced by the relatively modest number of papers reviewed by PubPeer or PubMed Commons.

I'm beginning to think that the only way to do evaluation well is to hire people whose only job is to evaluate something well. In other words, peer reviewers who are paid to review papers full time and are only measured by how often those papers are retracted or proved false. Or tenure reviewers who are paid exclusively to evaluate tenure cases and are measured by how well the post-tenure process goes for the people they evaluate and whether there is any measurable bias in their reviews.

The trouble with evaluating anything is that it is hard work and right now we aren't paying anyone to do it.

05 Feb

Johns Hopkins Data Science Specialization Top Performers

Editor's note: The Johns Hopkins Data Science Specialization is the largest data science program in the world. Brian, Roger, and I conceived the program at the beginning of January 2014, then built, recorded, and launched the classes starting in April 2014 with the help of Ira. Since April 2014 we have enrolled 1.76 million students and awarded 71,589 Signature Track verified certificates. The first capstone class ran in October - just 7 months after the first classes launched and 4 months after all classes were running. Despite this incredibly short time frame, 917 students finished all 9 classes and enrolled in the Capstone Course. 478 successfully completed the course.

When we first announced the Data Science Specialization, we said that the top performers would be profiled here on Simply Statistics. Well, that time has come, and we've got a very impressive group of participants that we want to highlight. These folks have successfully completed all nine MOOCs in the specialization and earned top marks in our first capstone session with SwiftKey. We had the pleasure of meeting some of them last week in a video conference, and we were struck by their insights and expertise. Check them out below.

Sasa Bogdanovic

Sasa Bogdanovic is passionate about everything data. For the last 6 years, he's been working in the iGaming industry, providing data products (integrations, data warehouse architectures and models, business intelligence tools, analyst reports and visualizations) for clients, helping them make better, data-driven business decisions.

Why did you take the JHU Data Science Specialization?

Although I've been working with data for many years, I wanted to take a different perspective and learn more about data science concepts and get insights into the whole pipeline from acquiring data to developing final data products. I also wanted to learn more about statistical models and machine learning.

What are you most proud of doing as part of the JHU Data Science Specialization?

I am very happy to have discovered the data science field. It is a whole new world that I find fascinating and inspiring to explore. I am looking forward to my new career in data science. This will allow me to combine all my previous knowledge and experience with my new insights and methods. I am very proud of every single quiz, assignment, and project. For sure, the capstone project was a culmination, and I am very proud and happy to have succeeded in making a solid data product and to be one of the top performers in the group. For this I am very grateful to the instructors, the community TAs, all other peers for their contributions in the forums, and Coursera for putting it all together and making it possible.

How are you planning on using your Data Science Specialization Certificate?

I have already put the certificate in motion. My company is preparing new projects, and I expect the certificate to add weight to our proposals.

Alejandro Morales Gallardo

I’m a trained physicist with strong coding skills. I have a passion for dissecting datasets to find the hidden stories in data and produce insights through creative visualizations. A hackathon and open-data aficionado, I have an interest in using data (and science) to improve our lives.

Why did you take the JHU Data Science Specialization?

I wanted to close a gap in my skills and transition into becoming a full-blown Data Scientist by learning key concepts and practices in the field. Learning R, an industry-relevant language, while creating a portfolio to showcase my abilities across the entire data science pipeline seemed very attractive.

What are you most proud of doing as part of the JHU Data Science Specialization?

I'm most proud of the Predictive Text App I developed. With the Capstone Project, it was extremely rewarding to be able to tackle a brand new data type and learn about text mining and natural language processing while building a fun and attractive data product. I was particularly proud that the accuracy of my app was not that far off from the SwiftKey smartphone app. I'm also proud of being a top performer!

How are you planning on using your Data Science Specialization Certificate?

I want to apply my new set of skills to develop other products, analyze new datasets and keep growing my portfolio. It is also helpful to have Verified Certificates to show prospective employers.

Nitin Gupta

Nitin is an independent trader and quant strategist with over 13 years of multi-faceted experience in the investment management industry. In the past he worked for a leading investment management firm where he built automated trading and risk management systems and gained complete life-cycle expertise in creating systematic investment products. He has a background in computer science with a strong interest in machine learning and its applications in quantitative modeling.

Why did you take the JHU Data Science Specialization?

I was fortunate to have done the first Machine Learning course taught by Prof. Andrew Ng at the launch of Coursera in 2012, which really piqued my interest in the topic. The next course I did on Coursera was Prof. Roger Peng's Computing For Data Analysis which introduced me to R. I realized that R was ideally suited for the quantitative modeling work I was doing. When I learned about the range of topics that the JHU DSS would cover - from the best practices in tidying and transforming data to modeling, analysis and visualization - I did not hesitate to sign up. Learning how to do all of this in an ecosystem built around R has been a huge plus.

What are you most proud of doing as part of the JHU Data Science Specialization?

I am quite pleased with the web apps I built which utilize the concepts learned during the track. One of my apps visualizes and compares historical stock performance with other stocks and market benchmarks after querying the data directly from web resources. Another one showcases a predictive typing engine that dynamically predicts the next few words to use and append, as the user types a sentence. The process of building these apps provided a fantastic learning experience. Also, for the first time I built something that even my near and dear ones could use and appreciate, which is terrific.

How are you planning on using your Data Science Specialization Certificate?

The broad skill set developed through this specialization could be applied across multiple domains. My current focus is on building robust quantitative models for systematic trading strategies that could learn and adapt to changing market environments. This would involve the application of machine learning techniques among other skills learned during the specialization. Using R and Shiny to interactively analyze the results would be tremendously useful.

Marc Kreyer

Marc Kreyer is an expert business analyst and software engineer with extensive experience in financial services in Austria and Liechtenstein. He successfully finishes complex projects by not only using broad IT knowledge but also outstanding comprehension of business needs. Marc loves combining his programming and database skills with his affinity for mathematics to transform data into insight.

Why did you take the JHU Data Science Specialization?

There are many data science MOOCs, but usually they are independent 4-6 week courses. The JHU Data Science Specialization was the first offering of a series of courses that build upon each other.

What are you most proud of doing as part of the JHU Data Science Specialization?

Creating a working text prediction app without any prior NLP knowledge and only minimal assistance from instructors.

How are you planning on using your Data Science Specialization Certificate?

Knowledge and experience are the most valuable things gained from the Data Science Specialization. As they can't be easily shown to future employers, the certificate can be a good indicator for them. Unfortunately there is neither an issue date nor a verification link on the certificate, therefore it will be interesting to see how valuable it really will be.

Hsing Liu

I studied in the U.S. for a number of years, and received my M.S. in mathematics from NYU before returning to my home country, Taiwan. I'm most interested in how people think and learn, and education in general. This year I'm starting a new career as an iOS app engineer.

Why did you take the JHU Data Science Specialization?

In my brief past job as an instructional designer, I read a lot about the new wave of online education, and was especially intrigued by how Khan Academy's data science division is using data to help students learn. It occurred to me that to leverage my math background and make a bigger impact in education (or otherwise), data science could be an exciting direction to take.

What are you most proud of doing as part of the JHU Data Science Specialization?

It may sound boring, but I'm proud of having done my best for each course in the track, going beyond the bare requirements when I'm able. The parts of the Specialization fit into a coherent picture of the discipline, and I'm glad to have put in the effort to connect the dots and gained a new perspective.

How are you planning on using your Data Science Specialization Certificate?

I'm listing the certificate on my resume and LinkedIn, and I expect to be applying what I've learned once my company's e-commerce app launches.

Yichen Liu

Yichen Liu is a business analyst at Toyota Western Australia where he is responsible for business intelligence development, data analytics and business improvement. His prior experience includes working as a sessional lecturer and tutor at Curtin University in finance and econometrics units.

Why did you take the JHU Data Science Specialization?

Recognising the trend that the world is more data-driven than before, I felt it was necessary to gain further understanding in data analysis to tackle both current and future challenges at work.

What are you most proud of doing as part of the JHU Data Science Specialization?

What I am most proud of from the program is that I have gained some basic knowledge in a totally new area, natural language processing. Though its connection with my current work is limited, I see the future of data analysis being more unstructured-data-driven and am willing to develop more knowledge in this area.

How are you planning on using your Data Science Specialization Certificate?

I see the certificate as a stepping stone into the data science world, and would like to conduct more advanced studies in data science especially for unstructured data analysis.

Johann Posch

After graduating from Vienna University of Technology with a specialization in Artificial Intelligence, I joined Microsoft. There I worked as a developer on various products, but the majority of the time as a Windows OS developer. After venturing into start-ups for a few years I joined GE Research to work on the Predix Big Data Platform, and recently I joined the Industrial Data Science team.

Why did you take the JHU Data Science Specialization?

Ever since I wrote my master's thesis on neural networks I have been intrigued with machine learning. I see data science as a field where great advances will happen over the next decade and as an opportunity to positively impact millions of lives. I like how JHU structured the course series.

What are you most proud of doing as part of the JHU Data Science Specialization?

Being able to complete the JHU Data Science Specialization in 6 months and to get a distinction on every one of the courses was a great success. However, the best moment was probably the way my capstone project (next word prediction) turned out: the model could be trained in incremental steps and was able to provide meaningful options in real time.

How are you planning on using your Data Science Specialization Certificate?

The course covered the concepts and tools needed to successfully address data science problems. It gave me the confidence and knowledge to apply for data science position. I am now working in the field at GE Research. I am grateful to all who made this Specialization happen!

Jason Wilkinson

Jason Wilkinson is a trader of commodity futures and other financial securities at a small proprietary trading firm in New York City. He and his wife, Katie, and dog, Charlie, can frequently be seen at the Jersey shore. And no, it's nothing like the tv show, aside from the fist pumping.

Why did you take the JHU Data Science Specialization?

The JHU Data Science Specialization helped me to prepare as I begin working on a Master's in Computer Science specializing in Machine Learning at Georgia Tech and also in researching algorithmic trading ideas. I also hope to find ways of using what I've learned in philanthropic endeavors, applying data science for social good.

What are you most proud of doing as part of the JHU Data Science Specialization?

I'm most proud of going from knowing zero R code to being able to apply it in the capstone and other projects in such a short amount of time.

How are you planning on using your Data Science Specialization Certificate?

The knowledge gained in pursuing the specialization certificate alone was worth the time put into it. A certificate is just a piece of paper. It's what you can do with the knowledge gained that counts.

Uli Zellbeck

I studied economics in Berlin with focus on econometrics and business informatics. I am currently working as a Business Intelligence / Data Warehouse Developer in an e-commerce company. I am interested in recommender systems and machine learning.

Why did you take the JHU Data Science Specialization?

I wanted to learn about Data Science because it provides a different approach on solving business problems with data. I chose the JHU Data Science Specialization on Coursera because it promised a wide range of topics and I like the idea of online courses. Also, I had experience with R and I wanted to deepen my knowledge with this tool.

What are you most proud of doing as part of the JHU Data Science Specialization?

There are two things: I successfully took all nine courses in 4 months, and the capstone project was really hard work.

How are you planning on using your Data Science Specialization Certificate?

I might get the chance to develop a Data Science department at my company. I'd like to use the certificate as a basis to gain deeper knowledge in the many parts of Data Science.

Fred Zheng Zhenhao

By the time I enrolled in the JHU Data Science Specialization, I was an undergraduate student at The Hong Kong Polytechnic University. Before that, I had read some data mining books and felt excited about the content, but I never got to implement any of the algorithms because I barely had any programming skills. After taking this series of courses, I am now able to analyze web content related to my research using R.

Why did you take the JHU Data Science Specialization?

I took this series of courses as a challenge to myself. I wanted to see whether my interest could support me through 9 courses and 1 capstone project. And I do want to learn more in this field. This specialization is different from other data mining or machine learning classes in that it covers the entire process, including Git, R, R Markdown, Shiny, etc., and I think these are necessary skills too.

What are you most proud of doing as part of the JHU Data Science Specialization?

Getting my word prediction app to respond in 0.05 seconds is already exciting, and one of the reviewers said, "congratulations your engine came up with the most correct prediction among those I reviewed: 3 out of 5, including one that stumped every one else : "child might stick her finger or a foreign object into an electrical (outlet)"". I guess that's the part I am most proud of.

How are you planning on using your Data Science Specialization Certificate?

It definitely goes in my CV for future job hunting.

05 Feb

Early data on knowledge units - atoms of statistical education

Yesterday I posted about atomizing statistical education into knowledge units. You can try out the first knowledge unit here: https://jtleek.typeform.com/to/jMPZQe. The early data is in and it is consistent with many of our hypotheses about the future of online education.

Namely:

  1. Completion rates are high when segments are shorter
  2. You can learn something about statistics in a short amount of time (2 minutes to complete, many people got all questions right)
  3. People will consume educational material on tablets/smartphones more and more.

[Screenshot: early completion data from the first knowledge unit]