Simply Statistics

05
Dec

Email is a to-do list made by other people - can someone make it more efficient?!

This is a follow-up to one of our most popular posts: getting email responses from busy people. This post had been sitting in the drafts folder for a few weeks; then this morning I saw this quote in our Twitter feed:

Your email inbox is a to-do list created by other people (via)

This is 100% true of my work email and I have to say, because of the way those emails are organized - as conversations rather than a prioritized, organized to-do list - I end up missing really important things or getting to them too late. This is happening frequently enough that I feel like I'm starting to cause serious problems for people.

So I am begging someone with way better skills than me to produce software that replaces Gmail in the following ways. It is a to-do list that I can allow people to add tasks to. The software shows me the following types of messages:

  1. We have an appointment at x time on y date to discuss z. Next to this message is a checkbox. If I click “ok” it gets added to my calendar, if I click “no” then a message gets sent to the person who scheduled the meeting saying I’m unavailable.
  2. A multiple choice question where they input the categories of answer I can give and I just pick one; the software sends them the response.
  3. A request to be added as a person who can assign me tasks with a yes/no answer.
  4. A longer request email - this has three entry fields: (1) what do you want? (2) when do you want it by? and (3) a yes/no checkbox asking if I'm willing to perform the task. If I say yes, it gets added to my calendar with automated reminders.
  5. It should interface with all the systems that send me reminder emails to organize the reminders.
  6. You can assign quotas to people, where they can only submit a certain number of tasks per month.
  7. It allows you to re-assign tasks to other people so when I am not the right person to ask, I can quickly move the task on to the right person.
  8. It would collect data and generate automated reports for me about what kind of tasks I'm usually forgetting/being late on and what times of day I'm bad about responding so that I could improve my response times.

The software would automatically reorganize events/to-dos to reflect changing deadlines/priorities, etc. This piece of software would revolutionize my life. Any takers?
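To make the request a bit more concrete, here is a rough sketch in R (only because that's what we use around here) of the kind of structured "task" such software would traffic in instead of free-form email. Everything below - the field names, the quota vector, the reassignment helper - is invented for illustration, not a design spec.

    # Purely illustrative sketch of the structured "tasks" the hypothetical
    # software would accept instead of free-form email
    task <- list(
      from    = "collaborator@example.edu",   # made-up sender
      type    = "appointment",                # or "multiple_choice", "yes_no", "request"
      what    = "discuss z",
      when    = as.Date("2012-12-12"),        # proposed date/deadline
      choices = c("ok", "no")                 # the one-click responses I would see
    )

    # Quotas (item 6 above) could be as simple as tasks allowed per person per month
    quotas <- c(collaborator = 10, student = 20, administrator = 5)

    # Reassigning a task (item 7 above) is then just handing it to someone else
    reassign <- function(task, new_owner) { task$owner <- new_owner; task }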

09
Nov

Interview with Tom Louis - New Chief Scientist at the Census Bureau

Tom Louis is a professor of Biostatistics at Johns Hopkins and will be joining the Census Bureau through an interagency personnel agreement as the new associate director for research and methodology and chief scientist. Tom has an impressive history of accomplishment in developing statistical methods for everything from environmental science to genomics. We talked to Tom about his new role at the Census, how it relates to his impressive research career, and how young statisticians can get involved in the statistical work at the Census. 


SS: How did you end up being invited to lead the research branch of the Census?

TL: Last winter, then-director Robert Groves (now Provost at Georgetown University) asked if I would be interested in  the possibility of becoming the next Associate Director of Research and Methodology (R&M) and Chief Scientist, succeeding  Rod Little (Professor of Biostatistics at the University of Michigan) in these roles.  I expressed interest and after several discussions with Bob and Rod, decided that if offered, I would accept.  It was offered and I did accept.  

As background, components of my research, especially Bayesian methods, are Census-relevant. Furthermore, during my time as a member of the National Academies Committee on National Statistics I served on the panel that recommended improvements in small area income and poverty estimates, chaired the panel that evaluated methods for allocating federal and state program funds by formula, and chaired a workshop on facilitating innovation in the Federal statistical system.

Rod and I noted that it's interesting and possibly not coincidental that with my appointment the first two associate directors are both former chairs of Biostatistics departments. It is the case that R&M's mission is quite similar to that of a Biostatistics department: methods and collaborative research, consultation, and education. And, there are many statisticians at the Census Bureau who are not in the R&M directorate, a sociology quite similar to that in a School of Public Health or a Medical campus.

SS: What made you interested in taking on this major new responsibility?

TL: I became energized by the opportunity for national service, and excited by the scientific, administrative, and sociological responsibilities and challenges. I'll be engaged in hiring and staff development, and increasing the visibility of the bureau's pre- and post-doctoral programs. The position will provide the impetus to take a deep dive into finite-population statistical approaches, and contribute to the evolving understanding of the strengths and weaknesses of design-based, model-based and hybrid approaches to inference. That I could remain a Hopkins employee by working via an Interagency Personnel Agreement sealed the deal. I will start in January 2013 and serve through 2015, and will continue to participate in some Hopkins-based activities.

In addition to activities within the Census Bureau, I'll be increasing connections among statisticians in other federal statistical agencies and will have a role in relations with researchers funded through the NSF to conduct census-related research.

SS: What are the sorts of research projects the Census is involved in? 

TL: The Census Bureau designs and conducts the decennial Census, the Current Population Survey, the American Community Survey, many, many other surveys for other Federal Statistical Agencies including the Bureau of Labor Statistics, and a quite extraordinary portfolio of others. Each identifies issues in design and analysis that merit attention, many entail “Big Data” and many require combining information from a variety of sources.  I give a few examples, and encourage exploration of www.census.gov/research.

You can get a flavor of the types of research from the titles of the six current centers within R&M: The Center for Adaptive Design, The Center for Administrative Records Research and Acquisition, The Center for Disclosure Avoidance Research, The Center for Economic Studies, The Center for Statistical Research and Methodology and The Center for Survey Measurement. Projects include multi-mode survey approaches, stopping rules for household visits, methods of combining information from surveys and administrative records, provision of focused estimates while preserving identity protection, improved small area estimates of income and of limited English skills (used to trigger provision of election ballots in languages other than English), and continuing investigation of issues related to model-based and design-based inferences.

 
SS: Are those projects related to your research?

TL: Some are, some will be, some will never be.  Small area estimation, hierarchical modeling with a Bayesian formalism, some aspects of adaptive design, some of combining evidence from a variety of sources, and general statistical modeling are in my power zone.  I look forward to getting involved in these and contributing to other projects.

SS: How does research performed at the Census help the American Public?

TL: Research innovations enable the bureau to produce more timely and accurate information at lower cost, improve validity (for example, new approaches have at least maintained respondent participation in surveys), and enhance the reputation of the Census Bureau as a trusted source of information. Estimates developed by Census are used to allocate billions of dollars in school aid, and they provide key planning information for businesses and governments.

SS: How can young statisticians get more involved in government statistical research?

TL: The first step is to become aware of the wide variety of activities and their high impact.  Visiting the Census website and those of other federal and state agencies, and the Committee on National Statistics (http://sites.nationalacademies.org/DBASSE/CNSTAT/) and the National Institute of Statistical Sciences (http://www.niss.org/) is a good start.   Make contact with researchers at the JSM and other meetings and be on the lookout for pre- and post-doctoral positions at Census and other federal agencies.

08
Nov

Some academic thoughts on the poll aggregators

The night of the presidential election I wrote a post celebrating the victory of data over punditry. I was motivated by the personal attacks made against Nate Silver by pundits who do not understand statistics. The post generated a little bit of (justified) nerdrage (see the comment section). So here I clarify a couple of things, not as a member of Nate Silver's fan club (my mancrush started with PECOTA, not fivethirtyeight) but as an applied statistician.

The main reason fivethirtyeight predicts election results so well is the simple idea of averaging polls. This idea was around way before fivethirtyeight started. In fact, it's a version of meta-analysis, which has been around for hundreds of years and is commonly used to improve the results of clinical trials. This election cycle several groups, including Sam Wang (Princeton Election Consortium), Simon Jackman (pollster), and Drew Linzer (VOTAMATIC), predicted the election perfectly using this trick.

While each group adds its own set of bells and whistles, most of the gains come from the aggregation of polls and an understanding of the concept of a standard error. Note that while each individual poll may be a bit biased, historical data show that these biases average out to 0. So by taking the average you obtain a close-to-unbiased estimate. Because there are so many pollsters, each one conducting several polls, you can also estimate the standard error of your estimate pretty well (empirically rather than theoretically). I include a plot below that provides evidence that bias is not an issue and that standard errors are well estimated. The dashed lines are at +/- 2 standard errors based on the average (across all states) standard error reported by fivethirtyeight. Note that the variability is smaller for the battleground states, where more polls were conducted (this is consistent with the state-specific standard errors reported by fivethirtyeight).
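To make the averaging idea concrete, here is a toy sketch in R. The poll numbers below are invented for illustration; they are not the polls used by fivethirtyeight or any other aggregator.

    # Made-up polls of candidate A's share (%) in a single state -- purely illustrative
    polls <- c(49.2, 51.1, 50.3, 48.9, 50.8, 51.4, 49.7, 50.6)

    # The aggregated estimate is just the average of the polls
    estimate <- mean(polls)

    # With several polls, the standard error of the average can be estimated
    # empirically from the spread of the polls themselves
    se <- sd(polls) / sqrt(length(polls))

    estimate                                  # aggregated estimate
    c(estimate - 2 * se, estimate + 2 * se)   # rough +/- 2 standard error interval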

Finally, there is the issue of the use of the word “probability”. Obviously one can correctly state that there is a 90% chance of observing event A and then have it not happen: Romney could have won and the aggregators would still have been “right”. Frequentists also complain when we talk about the probability of something that will only happen once. I actually don’t like getting into this philosophical discussion (Gelman has some thoughts worth reading) and I cut people who write for the masses some slack. If the aggregators consistently outperform the pundits in their predictions, I have no problem with them using the word “probability” in their reports. I look forward to some of the post-election analysis of all this.

07
Nov

Nate Silver does it again! Will pundits finally accept defeat?

My favorite statistician did it again! Just like in 2008, he predicted the presidential election results almost perfectly. For those who don’t know, Nate Silver is the statistician who runs the fivethirtyeight blog. He combines data from hundreds of polls, uses historical data to weight them appropriately, and then uses a statistical model to run simulations and predict outcomes.

While the pundits were claiming the race was a “dead heat”, the day before the election Nate gave Obama a 90% chance of winning. Several pundits attacked Nate (some attacks were personal) for his predictions and demonstrated their ignorance of Statistics. Jeff wrote a nice post on this. The plot below demonstrates how great Nate’s prediction was. Note that in each of the 45 states (including DC) for which he predicted a 90% or higher probability of winning for candidate A, candidate A won. For the other 6 states the range of percentages was 48-52%. If Florida goes for Obama he will have predicted every single state correctly.

Update: Congratulations also to Sam Wang (Princeton Election Consortium) and Simon Jackman (pollster), who also called the election perfectly. And thanks to the pollsters who provided the unbiased (on average) data used by all these folks. Data analysts won, “experts” lost.

Update 2: New plot with data from here. Old graph here.

05
Nov

If we truly want to foster collaboration, we need to rethink the "independence" criterion during promotion

When I talk about collaborative work, I don’t mean spending a day or two helping compute some p-values and ending up as middle author on a subject-matter paper. I mean spending months working on a project, from start to finish, with experts from other disciplines to accomplish a goal that can only be accomplished with a diverse team. Many papers in genomics are like this (the ENCODE and 1000 Genomes papers, for example). Investigator A dreams up the biology, B develops the technology, C codes up algorithms to deal with massive data, while D analyzes the data and assesses uncertainty, with the results reported in one high profile paper. I illustrate the point with genomics because it’s what I know best, but examples abound in other specialties as well.

Fostering collaborative research seems to be a priority for most higher education institutions. Both funding agencies and universities are creating initiative after initiative to incentivize team science. But at the same time the appointments and promotions process rewards researchers that have demonstrated “independence”. If we are not careful it may seem like we are sending mixed signals. I know of young investigators that have been advised to set time aside to demonstrate independence by publishing papers without their regular collaborators. This advice assumes that one can easily balance collaborative and independent research. But here is the problem: truly collaborative work can take just as much time and intellectual energy as independent research, perhaps more. Because time is limited, we might inadvertently be hindering the team science we are supposed to be fostering. Time spent demonstrating independence is time not spent working on the next high impact project.

I understand the argument for striving to hire and promote scholars who can excel no matter the context. But I also think it is unrealistic to compete in team science if we don’t find a better way to promote those who excel in collaborative research as well. It is a mistake to think that scholars who excel in solo research can easily succeed in team science. In fact, I have seen several specialized areas, important to the university, in which the best work is being produced by a small team. At the same time, “independent” researchers all over the country are also working in these areas and publishing just as many papers. But the influential work is coming almost exclusively from the team. Whom should your university hire and promote in this particular area? To me it seems clear that it is the team. But for them to succeed we can’t get in their way by requiring each individual member to demonstrate “independence” in the traditional sense.

 

 

04
Nov

Sunday Data/Statistics Link Roundup (11/4/12)

  1. Brian Caffo headlines the WaPo article about massive open online courses. He is the driving force behind our department’s involvement in offering these massive courses. I think this sums it up: “‘I can’t use another word than unbelievable,’ Caffo said. Then he found some more: ‘Crazy . . . surreal . . . heartwarming.’”
  2. A really interesting discussion of why “A Bet is a Tax on B.S.”. It nicely describes why intelligent bettors must be disinterested in the outcome; otherwise they will end up losing money. The Nate Silver controversy just doesn’t seem to be going away - good news for his readership numbers, I bet. (via Rafa)
  3. An interesting article on how scientists are not claiming global warming is the sole cause of the extreme weather events we are seeing, but that it does contribute to them being more extreme. The key quote: “We can’t say that steroids caused any one home run by Barry Bonds, but steroids sure helped him hit more and hit them farther. Now we have weather on steroids.” —Eric Pooley. (via Roger)
  4. The NIGMS is looking for a Biomedical Technology, Bioinformatics, and Computational Biology Director. I hope that it is someone who understands statistics! (via Karl B.)
  5. Here is another article that appears to misunderstand statistical prediction.  This one is about the Italian scientists who were jailed for failing to predict an earthquake. No joke. 
  6. We talk a lot about how much the data revolution will change industries from social media to healthcare. But here is an important reality check. Patients are not showing an interest in accessing their health care data. I wonder if part of the reason is that we haven’t come up with the right ways to explain, understand, and utilize what is inherently stochastic and uncertain information. 
  7. The BMJ is now going to require all data from clinical trials published in their journal to be made public. This is a brilliant, forward-thinking move. I hope other journals will follow suit. (via Karen B.R.)
  8. An interesting article about the impact of retractions on citation rates, suggesting that retractions may negatively affect the citation rates of papers in fields close to that of the retracted paper. I haven’t looked it over carefully, but how they control for confounding seems incredibly important in this case. (via Alex N.)
30
Oct

On weather forecasts, Nate Silver, and the politicization of statistical illiteracy

As you know, we have a thing for statistical literacy here at Simply Stats. So of course this column over at Politico got our attention (via Chris V. and others). The column is an attack on Nate Silver, who has a blog where he tries to predict the outcome of elections in the U.S. - you may have heard of it…

The argument that Dylan Byers makes in the Politico column is that Nate Silver is likely to be embarrassed by the outcome of the election if Romney wins. The reason is that Silver’s predictions have recently given Obama about a 75% chance of winning the election, and that number has never dropped below 60% or so.

I don’t know much about Dylan Byers, but from reading this column and a quick scan of his twitter feed, it appears he doesn’t know much about statistics. Some people have gotten pretty upset at him on Twitter and elsewhere about this fact, but I’d like to take a different approach: education. So Dylan, here is a really simple example that explains how Nate Silver comes up with a number like the 75% chance of victory for Obama. 

Let’s pretend, just to make the example really simple, that if Obama gets greater than 50% of the vote, he will win the election. Obviously, Silver doesn’t ignore the electoral college and all the other complications, but it makes our example simpler. Then assume that based on averaging a bunch of polls  we estimate that Obama is likely to get about 50.5% of the vote.

Now, we want to know the “percent chance” Obama will win, taking into account what we know. So let’s run a bunch of “simulated elections” where on average Obama gets 50.5% of the vote, but there is variability because we don’t have the exact number. Since we have a bunch of polls and we averaged them, we can get an estimate of how variable the 50.5% number is. The usual measure of this variability is the standard deviation. Say we get a standard deviation of 1% for our estimate. That would make the estimate pretty precise, but it’s not totally unreasonable given the amount of polling data out there.

We can run 1,000 simulated elections like this in R* (a free software programming language; if you don’t know R, may I suggest Roger’s Computing for Data Analysis class?). Here is the code to do that. The last line of code calculates the percent of times, in our 1,000 simulated elections, that Obama wins. This is the number that Nate would report on his site. When I run the code, I get an Obama win 68% of the time (Obama gets greater than 50% of the vote). But if you run it again that number will vary a little, since the elections are simulated.
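The linked code is not reproduced here, but a minimal sketch of the simulation described above looks something like this (the 50.5% mean and 1% standard deviation are the made-up numbers from our simplified example):

    n.sim <- 1000                                   # number of simulated elections
    obama.pct <- rnorm(n.sim, mean = 50.5, sd = 1)  # simulated Obama vote shares (%)

    # Fraction of simulated elections in which Obama gets more than 50% of the vote
    mean(obama.pct > 50)

Each run gives a slightly different answer because the elections are random draws, but it should land near the 68% figure mentioned above.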

The interesting thing is that even though we only estimate that Obama leads by about 0.5%, he wins 68% of the simulated elections. The reason is that we are pretty confident in that number, with our standard deviation being so low (1%). But that doesn’t mean that Obama will win 68% of the vote in any of the elections! In fact, here is a histogram of the percent of the vote that Obama wins: 

He never gets more than 54% or so and never less than 47% or so. So it is always a reasonably close election. Silver’s calculations are obviously more complicated, but the basic idea of simulating elections is the same. 

Now, this might seem like a goofy way to come up with a “percent chance” with simulated elections and all. But it turns out it is actually a pretty important thing to know and relevant to those of us on the East Coast right now. It turns out weather forecasts (and projected hurricane paths) are based on the same sort of thing - simulated versions of the weather are run and the “percent chance of rain” is the fraction of times it rains in a particular place. 

So Romney may still win and Obama may lose - and Silver may still get a lot of it right. But regardless, the approach taken by Silver is not based on politics, it is based on statistics. Hopefully we can move away from politicizing statistical illiteracy and toward evaluating the models for the real, underlying assumptions they make. 

* In this case, we could calculate the percent of times Obama would win with a formula (called an analytical calculation) since we have simplified so much. In Nate’s case it is much more complicated, so you have to simulate.
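For the curious, in our simplified example the analytical calculation is just the probability that a Normal random variable with mean 50.5 and standard deviation 1 exceeds 50, which is one line in R:

    # Probability that a Normal(mean = 50.5, sd = 1) vote share exceeds 50%
    1 - pnorm(50, mean = 50.5, sd = 1)   # about 0.69, in line with the simulation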